Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 697–705, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics AraSenTi: Large-Scale Twitter-Specific Arabic Sentiment Lexicons Nora Al-Twairesh1,2, Hend Al-Khalifa2, AbdulMalik Al-Salman1 Computer Science Department1, Information Technology Department2 College of Computer and Information Sciences King Saud University {twairesh,hendk,[email protected]} Abstract Sentiment Analysis (SA) is an active research area nowadays due to the tremendous interest in aggregating and evaluating opinions being disseminated by users on the Web. SA of English has been thoroughly researched; however research on SA of Arabic has just flourished. Twitter is considered a powerful tool for disseminating information and a rich resource for opinionated text containing views on many different topics. In this paper we attempt to bridge a gap in Arabic SA of Twitter which is the lack of sentiment lexicons that are tailored for the informal language of Twitter. We generate two lexicons extracted from a large dataset of tweets using two approaches and evaluate their use in a simple lexicon based method. The evaluation is performed on internal and external datasets. The performance of these automatically generated lexicons was very promising, albeit the simple method used for classification. The best F-score obtained was 89.58% on the internal dataset and 63.1-64.7% on the external datasets. 1 Introduction The past decade has witnessed the proliferation of social media websites which has led to the production of vast amounts of unstructured text on the Web. This text can be characterized as objective, i.e. containing facts, or subjective i.e. containing opinions and sentiments about entities. Sentiment Analysis (SA) is the research field that is concerned with identifying opinions in text and classifying them as positive, negative or neutral. SA of English has been thoroughly researched; however research on SA of Arabic has just flourished. Arabic is ranked fourth among languages on the web although it is the fastest growing language on the web among other languages (Internet World Stats, 2015). Arabic is a morphologically rich language where one lemma can have hundreds of surface forms; this complicates the tasks of SA. Moreover, the Arabic language has many variants. The formal language is called Modern Standard Arabic (MSA) and the spoken language differs in different Arabic countries producing numerous Arabic dialects sometimes called informal Arabic or colloquial Arabic. The language used in social media is known to be highly dialectal (Darwish and Magdy, 2014). Dialects differ from MSA phonologically, morphologically and syntactically and they do not have standard orthographies (Habash, 2010). Consequently, resources built for MSA cannot be adapted to dialects very well. The informal language used in social media and in Twitter in particular makes the SA of tweets a challenging task. The language on social media is known to contain slang, nonstandard spellings and evolves by time. As such sentiment lexicons that are built from standard dictionaries cannot adequately capture the informal language in social media text. Therefore, in this paper we propose to generate Arabic sentiment lexicons that are tweet-specific i.e. generated from tweets. We present two approaches to generating Arabic sentiment lexicons from a large dataset of 2.2 million tweets. 
The lexicons are evaluated on three datasets, one internal dataset extracted from the larger dataset of tweets and two external datasets from the literature on Arabic SA. Moreover, the lexicons are compared to an external Arabic lexicon generated also from tweets. A simple lexicon-based method is used to evaluate the lexicons. This paper is organized as follows: Section 2 reviews the related work on sentiment lexicon generation. Section 3 describes the details of the datasets used to generate the lexicons and how they were collected. Section 4 presents the approaches used to generate the lexicons. Section 5 details the experimental setup while Section 6 presents and analyzes the results. Finally, we 697 conclude the paper and present potential future work in Section 7. 2 Related Work Words that convey positive or negative sentiment are fundamental for sentiment analysis. Compiling a list of these words is what is referred to as sentiment lexicon generation. There are three approaches to generate a sentiment lexicon (Liu, 2012): manual approach, dictionary-based approach, and corpus-based approach. The manual approach is usually not done alone since it is time consuming and labor intensive. It is used however, in conjunction with automated approaches to check the correction of the resulting lexicons from these approaches. In this section we review popular English and Arabic sentiment lexicons in the literature. 2.1 English Sentiment Lexicons In the dictionary based approach as the name implies a dictionary is used by utilizing the synonym and antonym lists that are associated with dictionary words. The technique starts with a small set of sentiment words as seeds with known positive or negative orientations. The seed words are looked up in the dictionary then their synonyms and antonyms are added to the seed set and a new iteration starts. The process ends when no new words are found. A manual inspection is usually done after the process ends to correct errors. A majority of studies under this approach used the WordNet with different approaches for expanding the list such as distancebased measures (Kamps, 2004; Williams and Anand, 2009) and graph-based methods (BlairGoldensohn et al., 2008; Rao and Ravichandran, 2009). Pioneering work in this approach is the construction of SentiWordNet by (Esuli and Sebastiani, 2005). Initially, they started with a set of positive seeds and a set of negative seeds then expanded the sets using the synonym and antonym relations in WordNet. This formed a training set which they used in a supervised learning classifier and applied it to all the glosses in WordNet, the process is run iteratively. Then in a following attempt (Esuli and Sebastiani, 2006), a committee of classifiers based on the previous method were used to build SentiWordNet which contains terms that are associated with three scores for objectivity, positivity and negativity, where the sum of the scores is 1. The latest version is SentiWordNet 3.0 (Baccianella et al., 2010). As for corpus-based approaches, the words of the lexicon are extracted from the corpus using a seed list of known sentiment words and different approaches to find words of similar or opposite polarity. One of the earliest work in this approach was that of (Hatzivassiloglou and McKeown, 1997), where they utilized connectives e.g. and, but, etc. between adjectives in a corpus to learn new sentiment words not in the seed list. 
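As a toy illustration of this connective intuition (not Hatzivassiloglou and McKeown's actual method, which fits a log-linear model over conjoined adjectives and then clusters the resulting graph), polarity can be propagated one step along explicit "and"/"but" links between words:

```python
import re

def propagate_polarity(sentences, seeds):
    """Toy propagation of polarity along 'and'/'but' connectives.

    seeds: dict mapping a word to +1 (positive) or -1 (negative).
    Words joined by 'and' are assumed to share polarity; words joined
    by 'but' are assumed to have opposite polarity.
    """
    learned = dict(seeds)
    pattern = re.compile(r"\b(\w+)\s+(and|but)\s+(\w+)\b")
    for sent in sentences:
        for left, conn, right in pattern.findall(sent.lower()):
            sign = 1 if conn == "and" else -1
            if left in learned and right not in learned:
                learned[right] = sign * learned[left]   # propagate left -> right
            elif right in learned and left not in learned:
                learned[left] = sign * learned[right]   # propagate right -> left
    return learned

# Example: propagate_polarity(["the food was tasty and cheap"], {"tasty": 1})
```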
Turney, (2002); Turney and Littman, (2002) used the once popular AltaVista search engine to find the sentiment of a certain word through calculating the association strength between the word and a set of positive words minus the association strength between the word and a set of negative words. The association strength was measured using Pointwise-Mutual Information (PMI). The result is the sentiment score of the word, if it is positive this means the word is strongly associated with positive polarity and as such its polarity will be positive and if it is negative the word’s polarity will be negative. The magnitude indicates the sentiment intensity of the word. We used PMI to generate one of the lexicons in this paper. After the emergence of sentiment analysis as an evolving research field, several lexicons were constructed according to the approaches mentioned above. In the Bing Liu’s lexicon (Hu and Liu, 2004), which falls under the dictionarybased method, the WordNet was exploited to infer the semantic orientation of adjectives extracted from customer reviews. The lexicon only provides the prior polarity of words: positive or negative, the sentiment intensity of the words was not calculated. Another popular sentiment lexicon is the MPQA subjectivity lexicon (Wilson et al., 2005) which was constructed by manually annotating the subjective expressions in the MPQA corpus. The words were annotated with four tags: positive, negative, both and neutral then further classified as strong or weak to denote intensity. We use these two lexicons in the generation of the other lexicon in this paper. With the proliferation of social media websites, the need for lexicons that can capture the peculiarities of social medial language emerges. As such, many solutions for sentiment analysis of social media and Twitter in particular initiate by developing sentiment lexicons that are extracted from Twitter (Tang et al., 2014; Kiritchenko et al., 2014). 2.2 Arabic Sentiment Lexicons 698 Generating sentiment lexicons for Arabic has gained the interest of the research community lately. Consequently, we found several efforts for generating these lexicons. A recent effort to build a large scale multi-genre multi dialect Arabic sentiment lexicon was proposed by (AbdulMageed and Diab, 2014). However, it covers only two dialects: Egyptian and Levantine and is not yet fully applied to SSA tasks. Badaro et al., (2014) constructed ArSenL, a large scale Arabic sentiment lexicon. They relied on four resources to create ArSenL: English WordNet (EWN), Arabic WordNet (AWN), English SentiWordNet (ESWN), and SAMA (Standard Arabic Morphological Analyzer). Two approaches were followed producing two different lexicons: the first approach used AWN, by mapping AWN entries into ESWN using existing offsets thus producing ArSenL-AWN. The second approach utilizes SAMA’s English glosses by finding the highest overlapping synsets between these glosses and ESWN thus producing ArSenL-Eng. Hence ArSenL is the union of these two lexicons. Although this lexicon can be considered as the largest Arabic sentiment lexicon developed to date, it is unfortunate that it only has MSA entries and no dialect words and is not developed from a social media context which could affect the accuracy when applied on social media text. Following the example of ArSenL, the lexicon SLSA (Sentiment Lexicon for Standard Arabic) (Eskander and Rambow, 2015) was constructed by linking the lexicon of an Arabic morphological analyzer Aramorph with SentiWordNet. 
Although the approach is very similar to ArSenL, since both use SentiWordNet to obtain the scores of words, the linking algorithm used to link the glosses in Aramorph with those in SentiWordNet is different. SLSA starts by linking every entry in Aramorph with SentiWordNet if the one-gloss word and POS match. Intrinsic and extrinsic evaluations were performed by comparing SLSA and ArSenL which demonstrated the superiority of SLSA. Nevertheless, SLSA like ArSenL does not include dialect words and cannot accurately analyze social media text. Mohammad et al., (2015), generated three Arabic lexicons from Twitter. Three datasets were collected from Twitter: the first was tweets that contained the emoticons:”:)” and “:(“, the second was tweets that contained a seed list of positive and negative Arabic words as hashtags and the third was also from tweets that contained Arabic positive and negative words as hashtags but these were dialectal words. Then using PMI three lexicons were generated from these datasets: Arabic Emoticon Lexicon, Arabic Hashtag Lexicon and Dialectal Arabic Hashtag Lexicon. Our approach in generating one of the lexicons is very similar and thus we use one of their lexicons in the experiments to compare with our lexicons. The best performing lexicon was the Dialectal Arabic Hashtag Lexicon therefore we use it in this paper to compare and evaluate our lexicons. 3 Dataset Collection We followed the approaches in previous work on SA of English Twitter to collect the datasets. As in (Go et al., 2009; Pak and Paroubek, 2010) we utilized emoticons as noisy labels to construct the first dataset EMO-TWEET. Tweets containing the emoticons: “:)” and “:(“ and the rule “lang:ar” (to retrieve Arabic tweets only) were collected during November and December 2015. The total number of Tweets collected is shown in Table 1. Davidov et al., (2010) and Kiritchenko et al., (2014) used hashtags of sentiment words such as #good and #bad to create corpora of positive and negative tweets, we adopted a similar method to theirs. Initially, we tried collecting tweets that contain Arabic sentiment words with hashtags but the search results were too low. We designated this result to a cultural difference in using hashtags between the western and eastern societies. Arabs do not use hashtags in this way. Accordingly we opted to use the sentiment words as keywords without the hashtag sign and the number of search results was substantial. Tweets containing 10 Arabic words having positive polarity and 10 Arabic words having negative polarity were collected during January 2016. The keywords are in Table 2 and the number of tweets collected in Table1. These results constitute our second dataset KEY-TWEET. Retweets, tweets containing URLs or media and tweets containing non-Arabic words were all excluded from the dataset. The reason for excluding tweets with URLs and media is that we found that most of the tweets that contain URLS and media were spam. We also noticed that although we had specified in the search query that the fetched tweets should be in Arabic “lang:ar” some of the tweets were in English and other languages. So we had to add a filter to eliminate tweets with non-Arabic characters. In total, the number of collected tweets was around 6.3 million Arabic tweets in a time span of three months. After filtration and cleaning of 699 the tweets, the remaining were 2.2 million tweets. 
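A minimal sketch of this filtering step follows, assuming each collected tweet is a simple dictionary with a 'text' field and optional 'is_retweet'/'has_media' flags (these field names are our assumption, not the authors' data format); the presence of any Latin letter is used here as a proxy for "contains non-Arabic words".

```python
import re

LATIN_RE = re.compile(r"[A-Za-z]")        # any Latin letter => tweet contains non-Arabic words
URL_RE = re.compile(r"https?://|www\.")   # crude URL detector

def keep_tweet(tweet):
    """Return True if the tweet survives the cleaning step described above."""
    text = tweet["text"]
    if tweet.get("is_retweet", False) or text.startswith("RT "):
        return False                      # drop retweets
    if tweet.get("has_media", False) or URL_RE.search(text):
        return False                      # drop tweets with URLs or media (mostly spam)
    if LATIN_RE.search(text):
        return False                      # drop tweets containing non-Arabic words
    return True

# cleaned = [t for t in collected_tweets if keep_tweet(t)]
```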
EMO-TWEET KEY-TWEET Positive Emoticon :) Negative Emoticon :( Positive keywords Negative keywords Total number of tweets collected 2,245,054 1,272,352 1,823,517 1,000,212 After cleaning and filtering 1,033,393 407,828 447,170 337,535 Number of Tokens 12,739,308 5,082,070 9,058,412 7,135,331 Table 1: Number of collected tweets, number of tweets in datasets after cleaning and filtering and number of tokens in each dataset. Positive Keywords English Translation Negative Keywords English Translation سعادة sEAdp Happiness محزن mHzn Sad خير xyr Good مؤسف m&sf Regrettable تفاؤل tfA&l Optimism لألسف ll>sf Unfortunately أعجبني >Ejbny I like it فاشل fA$l Failing, unsuccessful نجاح njAH Success تشاؤم t$A&m Pessimism فرح frH Joy سيء sy' Bad إيجابي <yjAby Positive سلبي slby Negative جيد jyd Good إهمال <hmAl Negligence ممتاز mmtAz Excellent خطأ xT> Wrong رائع rA}E Fabulous مؤلم m&lm Painful Table 2: Positive and negative keywords used to collect tweets. 4 Lexicon Generation Two sentiment lexicons were extracted from the datasets of tweets using two different approaches. We call the first AraSenTi-Trans and the second AraSenTi-PMI. The approaches are presented in the following subsections. 4.1 AraSenTi-Trans The datasets of tweets were processed using the MADAMIRA tool (Pasha et al., 2014). MADAMIRA is a recent effort by Pasha et al. (2014) that combines some of the best aspects of two previous systems used for Arabic NLP: MADAMorphological Analysis and Disambiguation of Arabic (Habash and Rambow, 2005; Roth et al., 2008; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA, on the other hand, improves on these two systems with a solution that is more robust, portable, extensible, and faster. The MADAMIRA tool identifies words into three types: ARABIC, NO_ANALYSIS and NON_ARABIC. This feature was used to eliminate tweets containing non-Arabic words and to distinguish MSA words from dialect words as NO_ANALYSIS words can be identified as dialect words or misspelled words or new words made up by tweepers (twitter users). According to the POS tags provided by MADAMIRA, we extracted only nouns, adjectives, adverbs, verbs and negation particles in an effort to eliminate unwanted stop words. Then we utilized two popular English sentiment lexicons that were used in previous work on English and Arabic sentiment analysis: the Liu lexicon (Hu and Liu, 2004) and the MPQA lexicon (Wilson et al., 2005). Most previous papers on Arabic SA that used these lexicons just translated them into Arabic, yet we tried a different approach. MADAMIRA provides an English gloss for each word identified as ARABIC, the gloss could be one, two or three words. We used this gloss to compare with the Liu lexicon and MPQA lexicon using the following heuristics:  If all the word’s glosses are positive in both lexicons or found in one lexicon as positive and do not exist in the other lexicon: classify as positive.  If all the word’s glosses are negative in both lexicons or found in one lexicon as negative and do not exist in the other: classify as negative.  If the word’s glosses have different polarities in the lexicons or are (both) in MPQA: add to both list.  Else: all remaining words are classified as neutral. Although this approach could contain some errors, a manual check can be performed to clean up. The manual cleanup is time consuming but it is a one-time effort that requires only a few days (Liu, 2012). 
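The gloss-matching heuristics above can be made concrete with a small sketch. Here `liu` and `mpqa` are assumed to be dictionaries mapping English words to a polarity label ('positive', 'negative', or, for MPQA, also 'both'), and `gloss_words` are the English gloss tokens MADAMIRA returns for an Arabic word; this is an approximation of the listed rules, not the authors' code.

```python
def classify_word(gloss_words, liu, mpqa):
    """Assign one Arabic word to positive/negative/both/neutral from its English glosses."""
    labels = set()
    for g in gloss_words:
        for lex in (liu, mpqa):
            if g in lex:
                labels.add(lex[g])
    if labels == {"positive"}:
        return "positive"    # positive wherever found, absent from the other lexicon otherwise
    if labels == {"negative"}:
        return "negative"
    if len(labels) > 1 or "both" in labels:
        return "both"        # conflicting polarities, or tagged 'both' in MPQA
    return "neutral"         # no sentiment evidence found in either lexicon
```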
Accordingly, we gave the automatically generated lists of positive, negative, both, and neutral words to two Arabic native speakers to review and correct the errors. We found that 5% of the neutral words were misclassified as neutral although they were sentiment-bearing words. Also, 10% of the positive words were misclassified as negative, and 15% of the negative words were misclassified as positive. The lists were corrected accordingly. We can conclude that using translated English lexicons does not always give us an accurate classification of polarity. This result could be due to mistranslations or cultural differences in classifying sentiment, as demonstrated by (Mohammad et al., 2015; Mobarz et al., 2014; Duwairi, 2015). Accordingly, we propose a different approach to generating another lexicon in the following section.

4.2 AraSenti-PMI

The second lexicon was also generated from the dataset of tweets, but through calculating the pointwise mutual information (PMI) measure for all words in the positive and negative datasets of tweets. The PMI is a measure of the strength of association between two words in a corpus, i.e., the probability of the two words co-occurring in the corpus (Church and Hanks, 1990). It has been adapted in sentiment analysis as a measure that compares the frequency of a word in positive text with its frequency in negative text. Turney (2002) and Turney and Littman (2002) were the first to propose using this measure in sentiment analysis. They used the once popular AltaVista search engine to find the sentiment of a certain word by calculating the PMI between the word and a set of positive words minus the PMI between the word and a set of negative words. Other works that used PMI to generate sentiment lexicons can be found in (Kiritchenko et al., 2014; Mohammad et al., 2015). The frequencies of the words in the positive and negative datasets of tweets were calculated respectively, then the PMI was calculated for each as follows:

PMI(w, pos) = log2[(freq(w, pos) * N) / (freq(w) * freq(pos))]   (1)

where freq(w, pos) is the frequency of the word w in the positive tweets, freq(w) is the frequency of the word w in the dataset, freq(pos) is the total number of tokens in the positive tweets and N is the total number of tokens in the dataset. The PMI of the word associated with the negative tweets, PMI(w, neg), is calculated in the same way. The sentiment score for word w is then:

Sentiment Score(w) = PMI(w, pos) - PMI(w, neg)   (2)

This was calculated for all words that occurred in the dataset five times or more; the reason is that the PMI is a poor estimator for low-frequency words (Kiritchenko et al., 2014), so words occurring less than 5 times were excluded. Also, for words that are found in the set of positive tweets but not in the set of negative tweets, or vice versa, Equation 2 would give an infinite sentiment score, which would highly affect the calculation of the sentiment of the whole tweet. Since the absence of a word from the negative dataset does not imply that the word's sentiment is positive (or vice versa), we calculated the sentiment score of such words as in Equation 1: PMI(w, pos) for words occurring only in the positive tweets and PMI(w, neg) for words occurring only in the negative tweets.
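A minimal sketch of this scoring scheme (Equations 1 and 2, the minimum-frequency cut-off, and the single-class fallback), assuming the positive and negative tweet collections are available as flat token lists:

```python
import math
from collections import Counter

def build_pmi_lexicon(pos_tokens, neg_tokens, min_count=5):
    """Sentiment score per word following Equations 1 and 2 above."""
    pos_freq, neg_freq = Counter(pos_tokens), Counter(neg_tokens)
    freq_pos, freq_neg = len(pos_tokens), len(neg_tokens)    # total tokens per class
    n_total = freq_pos + freq_neg                            # N in Equation 1

    def pmi(w, class_freq, class_total):
        freq_w = pos_freq[w] + neg_freq[w]                   # freq(w) over the whole dataset
        return math.log2((class_freq[w] * n_total) / (freq_w * class_total))

    lexicon = {}
    for w in set(pos_freq) | set(neg_freq):
        if pos_freq[w] + neg_freq[w] < min_count:
            continue                                         # PMI is unreliable for rare words
        if pos_freq[w] and neg_freq[w]:
            score = pmi(w, pos_freq, freq_pos) - pmi(w, neg_freq, freq_neg)  # Eq. 2
        elif pos_freq[w]:
            score = pmi(w, pos_freq, freq_pos)               # word seen only in positive tweets
        else:
            score = pmi(w, neg_freq, freq_neg)               # word seen only in negative tweets
        lexicon[w] = score
    return lexicon
```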
4.3 Lexicons Coverage

The number of positive and negative entries in each of the lexicons is shown in Table 3. The details of the lexicon of (Mohammad et al., 2015) are also shown, since this lexicon will be used in the experiments in the following section for evaluation and comparison purposes. Mohammad et al. (2015) generated three lexicons; however, they demonstrated that the Dialectal Arabic Hashtag Lexicon (DAHL) gave the best results, and accordingly we use this lexicon in the experiments in this paper. From Table 3, we can see the high coverage of the generated lexicons AraSenti-Trans and AraSenti-PMI when compared to DAHL. In addition, we manually examined the three lexicons of (Mohammad et al., 2015) and found that they were not cleaned: they contained non-Arabic words and hashtags that do not convey sentiment. This puts a question mark over the validity of the lexicons and the number of entries reported. Our datasets were cleaned of non-Arabic words and punctuation, so the generated lexicons contain only valid Arabic words.

Lexicon          Positive   Negative   Total
AraSenti-Trans   59,525     71,817     131,342
AraSenti-PMI     56,938     37,023     93,961
DAHL             11,947     8,179      20,126

Table 3: Details of the generated lexicons and the lexicon they will be compared to.

5 Evaluation

To evaluate the performance of the tweet-specific lexicons, we performed a set of experiments using a simple lexicon-based approach; hence no training and/or tuning is required. We performed a two-way classification on the datasets (positive or negative). We leave the problem of three- and four-way classification (positive, negative, neutral, mixed) for future work. We evaluated the generated lexicons on a dataset of 10,133 tweets extracted from the larger datasets of tweets EMO-TWEET and KEY-TWEET. The tweets were manually annotated by three annotators who are Arabic native speakers. Conflicts between annotators were resolved by majority voting. We will call this dataset AraSenti-Tweet. We also evaluated the generated lexicons on two external datasets of tweets: ASTD by (Nabil et al., 2015) and RR by (Refaee and Rieser, 2014). We extracted only the tweets that were labeled as positive or negative from these datasets. The details of all the datasets used in the experiments are given in Table 4. We plan to release the dataset and the generated lexicons to the public.

Dataset          Positive   Negative   Total
AraSenti-Tweet   4329       5804       10133
ASTD             797        1682       2479
RR               876        1941       2817

Table 4: Datasets used in the evaluation of the generated lexicons.

Negation significantly affects the sentiment of its scope and consequently affects the evaluation of the lexicons. Accordingly, we propose to evaluate the generated lexicons in two settings: with and without negation handling. We also compare the performance of the generated lexicons with a lexicon that was generated with a very similar approach to one of our lexicons. Since the datasets are unbalanced, we report the macro-averaged F-score (Favg), and the precision (P) and recall (R) of the positive and negative classes, computed as follows:

P = TP / (TP + FP)   (3)
R = TP / (TP + FN)   (4)
F = 2PR / (P + R)    (5)

where, in the case of the positive class, TP is the number of positive tweets classified correctly as positive (true positives), FP is the number of negative tweets falsely classified as positive (false positives), and FN is the number of positive tweets falsely classified as negative (false negatives). The same holds for the negative class.
Then the overall F-score is calculated as:

Favg = (Fpos + Fneg) / 2   (6)

5.1 Setup A: No Negation Handling

For the AraSenTi-Trans lexicon, we use the simple method of counting the number of positive and negative words in the tweet; whichever count is greater denotes the sentiment of the tweet. The results of applying this method on the different datasets are illustrated in Table 5. As for the AraSenTi-PMI lexicon, the sentiment scores of all words in the tweet are summed up. The natural threshold to classify the data into positive or negative would be zero, since positive scores denote positive sentiment and negative scores denote negative sentiment. However, according to (Kiritchenko et al., 2014), other thresholds could give better results. Consequently, we experimented with the value of this threshold. We set it to 0, 0.5 and 1, and found that the best results were obtained when setting the threshold to 1. As such, if the sum of the sentiment scores of the words in a tweet is greater than one, the tweet is classified as positive; otherwise the tweet is classified as negative.

5.2 Setup B: Negation Handling

We also experimented with handling negation in the tweet, by compiling a list of negation particles found in the tweets and checking whether a tweet contains a negation particle or not. For the AraSenTi-Trans lexicon, if the tweet contains a negation particle and a positive word, we do not increment the positive word counter. However, for tweets containing negative words and negation particles, we found that not incrementing the negative word counter degraded the accuracy, so we opted to increment the negative word counter even if a negation particle is found in the tweet. Moreover, we experimented with adjusting the score of negation particles in the AraSenTi-PMI lexicon. After several experiments, we found that setting the score of the negation particles to -1 gave the best performance.

6 Discussion and Results

The results of the first experimental setup for the two generated lexicons AraSenti-Trans and AraSenti-PMI are presented in Table 5. For the RR and AraSenti-Tweet datasets, the superiority of the AraSenti-PMI lexicon is evident. The Favg of applying the AraSenti-PMI lexicon on the RR dataset is 63.6%, while the Favg of applying it on the AraSenti-Tweet dataset is 88.92%. As for the ASTD dataset, applying the AraSenti-Trans lexicon gave better results, with an Favg of 59.8%. In Table 6, the results of the lexicon-based method with negation handling are presented. The results of using the DAHL lexicon on the same datasets are also reported for comparison. First of all, the effect of negation handling on performance is significant, with increases of 1-4% on all datasets. Although the two lexicons AraSenti-Trans and AraSenti-PMI handled negation differently, the increase for every dataset was almost the same: +4% on the ASTD dataset, +1% on the RR dataset, and +2% and +1% respectively on the AraSenti-Tweet dataset. When comparing the performance of the generated lexicons AraSenti-Trans and AraSenti-PMI with the DAHL lexicon, we find that our lexicons give better classification results on all datasets. Finally, although the two lexicons were extracted from the same dataset, we find that their performance varied on the different datasets. The best performance on the ASTD dataset was obtained with the AraSenti-Trans lexicon, whereas the best performance on the RR and AraSenti-Tweet datasets was obtained with the AraSenti-PMI lexicon.
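For concreteness, the following sketch summarizes the AraSenti-PMI classifier of Setups A and B and the macro-averaged F-score of Equations 3-6. The tokenization and the list of negation particles are assumptions; passing an empty `negation_particles` set reproduces Setup A.

```python
def classify_pmi(tokens, lexicon, negation_particles=frozenset(), threshold=1.0):
    """Lexicon-based polarity of one tokenized tweet (Setups A and B above).

    An assumed example negation-particle set in Buckwalter transliteration
    could be {"lA", "mA", "lys"}; the authors' exact list is not reproduced here.
    """
    score = 0.0
    for tok in tokens:
        if tok in negation_particles:
            score += -1.0                   # Setup B: negation particle score forced to -1
        else:
            score += lexicon.get(tok, 0.0)  # unseen words contribute nothing
    return "positive" if score > threshold else "negative"

def macro_f(gold, predicted):
    """Macro-averaged F-score over the positive and negative classes (Eqs. 3-6)."""
    f_scores = []
    for cls in ("positive", "negative"):
        tp = sum(g == cls and p == cls for g, p in zip(gold, predicted))
        fp = sum(g != cls and p == cls for g, p in zip(gold, predicted))
        fn = sum(g == cls and p != cls for g, p in zip(gold, predicted))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f_scores.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(f_scores) / 2
```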
Moreover, albeit the simple lexicon-based method used in the evaluation, we find that the performance is encouraging. Several enhancements could be made such as incorporating Arabic valence shifters and certain linguistic rules to handle them. Lexicon DataSet AraSenti-Trans AraSenti-PMI Positve Negative Favg Positve Negative Favg P R P R P R P R ASTD 43.92 90.21 90.74 45.42 59.80 37.24 77.79 78.26 37.87 50.70 RR 40.66 89.95 89.99 40.75 56.05 46.01 73.74 83.72 60.95 63.60 AraSenti-Tweet 63.14 95.43 94.48 58.44 74.11 85.73 89.37 91.81 88.9 88.92 Table 5: Results of the first experimental setup without negation handling on the generated lexicons AraSenti-Trans and AraSenti-PMI. Lexicon DataSet AraSenti-Trans AraSenti-PMI DAHL Positve Negative Favg Positve Negative Favg Positve Negative Favg P R P R P R P R P R P R ASTD 46.24 86.32 89 52.44 63.10 38.06 56.59 73.26 56.36 54.61 36.4 43.16 70.47 64.27 53.36 RR 41.31 86.3 87.84 44.67 57.55 52.03 49.77 77.77 79.29 64.70 38.06 38.58 72.11 71.66 55.10 AraSentiTweet 66.27 90.76 90.49 65.54 76.31 91.16 84.57 89.08 93.88 89.58 76.35 62.88 75.53 85.48 74.58 Table 6: Results of the second experimental setup with negation handling on the generated lexicons AraSenti-Trans and AraSenti-PMI and on the external lexicon DAHL 7 Conclusion In this paper, two large-scale Arabic sentiment lexicons were generated from a large dataset of Arabic tweets. The significance of these lexicons lies in their ability to capture the idiosyncratic nature of social media text. Moreover, their high coverage suggests the possibility of using them in different genres such as product reviews. This is a possible future research direction. The performance of the lexicons on external datasets also suggests their ability to be used in classifying new datasets. However, there is much room for improvement given the simple method 703 used in evaluation. This simple lexicon-based method could be further enhanced by incorporating Arabic valence shifters and certain linguistic rules to handle them. We also plan to make the classification multi-way: positive, negative, neutral and mixed. Acknowledgments This Project was funded by the National Plan for Science, Technology and Innovation (MAARIFAH), King Abdulaziz City for Science and Technology, Kingdom of Saudi Arabia, Award Number (GSP-36-332). References Muhammad Abdul-Mageed and Mona Diab. 2014. SANA: A Large Scale Multi-Genre, Multi-Dialect Lexicon for Arabic Subjectivity and Sentiment Analysis. In In Proceedings of the Language Resources and Evaluation Conference (LREC), Reykjavik, Iceland. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining. In LREC, volume 10, pages 2200–2204. Gilbert Badaro, Ramy Baly, Hazem Hajj, Nizar Habash, and Wassim El-Hajj. 2014. A large scale Arabic sentiment lexicon for Arabic opinion mining. ANLP 2014:165. Sasha Blair-Goldensohn, Kerry Hannan, Ryan McDonald, Tyler Neylon, George A Reis, and Jeff Reynar. 2008. Building a sentiment summarizer for local service reviews. In WWW Workshop on NLP in the Information Explosion Era, volume 14, pages 339–348. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational linguistics, 16(1):22–29. Kareem Darwish and Walid Magdy. 2014. Arabic Information Retrieval. Foundations and Trends in Information Retrieval, 7(4):239–342. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. 
Enhanced sentiment learning using twitter hashtags and smileys. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 241–249. Association for Computational Linguistics. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2007. Automated methods for processing arabic text: from tokenization to base phrase chunking. Arabic Computational Morphology: Knowledge-based and Empirical Methods. Kluwer/Springer. Rehab M Duwairi. 2015. Sentiment analysis for dialectical Arabic. In 6th International Conference on Information and Communication Systems (ICICS), 2015, pages 166–170. IEEE. Ramy Eskander and Owen Rambow. 2015. SLSA: A Sentiment Lexicon for Standard Arabic. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2545–2550, Lisbon,Purtogal, September. ACL. Andrea Esuli and Fabrizio Sebastiani. 2005. Determining the semantic orientation of terms through gloss classification. In Proceedings of the 14th ACM international conference on Information and knowledge management, pages 617–624. ACM. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC’06), volume 6, pages 417–422. Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford:1–12. Nizar Habash and Owen Rambow. 2005. Arabic tokenization, part-of-speech tagging and morphological disambiguation in one fell swoop. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 573–580. Association for Computational Linguistics. Nizar Habash, Owen Rambow, and Ryan Roth. 2009. Mada+ tokan: A toolkit for arabic tokenization, diacritization, morphological disambiguation, pos tagging, stemming and lemmatization. In Proceedings of the 2nd International Conference on Arabic Language Resources and Tools (MEDAR), Cairo, Egypt, pages 102–109. Nizar Habash, Ryan Roth, Owen Rambow, Ramy Eskander, and Nadi Tomeh. 2013. Morphological Analysis and Disambiguation for Dialectal Arabic. In HLT-NAACL, pages 426–432. Citeseer. Nizar Y Habash. 2010. Introduction to Arabic natural language processing. Synthesis Lectures on Human Language Technologies, 3(1):1–187. Vasileios Hatzivassiloglou and Kathleen R McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the 35th annual meeting of the association for computational linguistics and eighth conference of the european chapter of the as704 sociation for computational linguistics, pages 174– 181. Association for Computational Linguistics. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168– 177. ACM. Internet World Stats. 2015. Internet World Stats. November. Jaap Kamps. 2004. Using wordnet to measure semantic orientations of adjectives. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004). Svetlana Kiritchenko, Xiaodan Zhu, and Saif M Mohammad. 2014. Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research, 50:723–762. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1–167. Hanaa Mobarz, Mohsen Rashown, and Ibrahim Farag. 2014. 
Using Automated Lexical Resources in Arabic Sentence Subjectivity. International Journal of Artificial Intelligence & Applications, 5(6):1. Saif M Mohammad, Mohammad Salameh, and Svetlana Kiritchenko. 2015. How Translation Alters Sentiment. Journal of Artificial Intelligence Research, 54:1–20. Mahmoud Nabil, Mohamed Aly, and Amir F Atiya. 2015. ASTD: Arabic Sentiment Tweets Dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2515–2519. Alexander Pak and Patrick Paroubek. 2010. Twitter as a Corpus for Sentiment Analysis and Opinion Mining. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2010), Valleta,Malta. European Language Resources Association (ELRA). Arfath Pasha, Mohamed Al-Badrashiny, Ahmed El Kholy, Ramy Eskander, Mona Diab, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. Madamira: A fast, comprehensive tool for morphological analysis and disambiguation of arabic. In In Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland. European Language Resources Association (ELRA). Delip Rao and Deepak Ravichandran. 2009. Semisupervised polarity lexicon induction. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 675–682. Association for Computational Linguistics. Eshrag Refaee and Verena Rieser. 2014. An Arabic Twitter Corpus for Subjectivity and Sentiment Analysis. In In Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland. European Language Resources Association (ELRA). Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic morphological tagging, diacritization, and lemmatization using lexeme models and feature ranking. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 117–120. Association for Computational Linguistics. Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014. Building Large-Scale TwitterSpecific Sentiment Lexicon: A Representation Learning Approach. In COLING, pages 172–182. Peter D Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 417–424. Association for Computational Linguistics. Peter Turney and Michael L Littman. 2002. Unsupervised learning of semantic orientation from a hundred-billion-word corpus. Technical report, National Research Council Canada, NRC Institute for Information Technology; National Research Council Canada. Gbolahan K Williams and Sarabjot Singh Anand. 2009. Predicting the Polarity Strength of Adjectives Using WordNet. In Third International AAAI Conference on Weblogs and Social Media. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 347–354. Association for Computational Linguistics. 705
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 706–714, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Unsupervised Multi-Author Document Decomposition Based on Hidden Markov Model Khaled Aldebei Xiangjian He Wenjing Jia Global Big Data Technologies Centre University of Technology Sydney Australia {Khaled.Aldebei,Xiangjian.He,Wenjing.Jia}@uts.edu.au Jie Yang Lab of Pattern Analysis and Machine Intelligence Shanghai Jiaotong University China [email protected] Abstract This paper proposes an unsupervised approach for segmenting a multiauthor document into authorial components. The key novelty is that we utilize the sequential patterns hidden among document elements when determining their authorships. For this purpose, we adopt Hidden Markov Model (HMM) and construct a sequential probabilistic model to capture the dependencies of sequential sentences and their authorships. An unsupervised learning method is developed to initialize the HMM parameters. Experimental results on benchmark datasets have demonstrated the significant benefit of our idea and our approach has outperformed the state-of-the-arts on all tests. As an example of its applications, the proposed approach is applied for attributing authorship of a document and has also shown promising results. 1 Introduction Authorship analysis is a process of inspecting documents in order to extract authorial information about these documents. It is considered as a general concept that embraces several types of authorship subjects, including authorship verification, plagiarism detection and author attribution. Authorship verification (Brocardo et al., 2013; Potha and Stamatatos, 2014) decides whether a given document is written by a specific author. Plagiarism detection (Stein et al., 2011; Kestemont et al., 2011) seeks to expose the similarity between two texts. However, it is unable to determine if they are written by the same author. In author attribution (Juola, 2006; Savoy, 2015), a real author of an anonymous document is predicted using labeled documents of a set of candidate authors. Another significant subject in authorship analysis, which has received comparatively less attention from research community, is authorship-based document decomposition (ABDD). This subject is to group the sentences of a multi-author document to different classes, of which each contains the sentences written by only one author. Many applications can take advantage of such a subject, especially those in forensic investigation, which aim to determine the authorship of sentences in a multi-author document. Furthermore, this kind of subject is beneficial for detecting plagiarism in a document and defining contributions of authors in a multi-author document for commercial purpose. ABDD can also be applied to identify which source (regarded as an ‘author’ in this paper) a part of a document is copied from when the document is formed by taking contents from various sources. In despite of the benefits of ABDD, there has been little research reported on this subject. Koppel et al. (2011) are the first researchers who implemented an unsupervised approach for ABDD. However, their approach is restricted to Hebrew documents only. The authors of Akiva and Koppel (2013) addressed the drawbacks of the above approach by proposing a generic unsupervised approach for ABDD. 
Their approach utilized distance measurements to increase the precision and accuracy of clustering and classification phases, respectively. The accuracy of their approach is highly dependent on the number of au706 thors. When the number of authors increases, the accuracy of the approach is significantly dropped. Giannella (2015) presented an improved approach for ABDD when the number of authors of the document is known or unknown. In his approach, a Bayesian segmentation algorithm is applied, which is followed by a segment clustering algorithm. However, the author tested his approach by using only documents with a few transitions among authors. Furthermore, the accuracy of the approach is very sensitive to the setting of its parameters. In Aldebei et al. (2015), the authors presented an unsupervised approach ABDD by exploiting the differences in the posterior probabilities of a Naive-Bayesian model in order to increase the precision and the classification accuracy, and to be less dependent on the number of authors in comparing with the approach in Akiva and Koppel (2013). Their work was tested on documents with up to 400 transitions among authors and the accuracy of their approach was not sensitive to the setting of parameters, in contrast with the approach in Giannella (2015). However, the performance of their approach greatly depends on a threshold, of which the optimal value for an individual document is not easy to find. Some other works have focused on segmenting a document into components according to their topics. For applications where the topics of documents are unavailable, these topicbased solutions will fail. In this paper, the ABDD approach is independent of documents’ topics. All of the existing works have assumed that the observations (i.e., sentences) are independent and identically distributed (i.i.d.). No consideration has been given to the contextual information between the observations. However, in some cases, the i.i.d. assumption is deemed as a poor one (Rogovschi et al., 2010). In this paper, we will relax this assumption and consider sentences of a document as a sequence of observations. We make use of the contextual information hidden between sentences in order to identify the authorship of each sentence in a document. In other words, the authorships of the “previous” and “subsequent” sentences have relationships with the authorship of the current sentence. Therefore, in this paper, a well-known sequential model, Hidden Markov Model (HMM), is used for modelling the sequential patterns of the document in order to describe the authorship relationships. The contributions of this article are summarized as follows. 1. We capture the dependencies between consecutive elements in a document to identify different authorial components and construct an HMM for classification. It is for the first time the sequential patterns hidden among document elements is considered for such a problem. 2. To build and learn the HMM model, an unsupervised learning method is first proposed to estimate its initial parameters, and it does not require any information of authors or document’s context other than how many authors have contributed to write the document. 3. Different from the approach in Aldebei et al. (2015), the proposed unsupervised approach no longer relies on any predetermined threshold for ABDD. 4. Comprehensive experiments are conducted to demonstrate the superior performance of our ideas on both widely-used artificial benchmark datasets and an authentic scientific document. 
As an example of its applications, the proposed approach is also applied for attributing authorship on a popular dataset. The proposed approach can not only correctly determine the author of a disputed document but also provide a way for measuring the confidence level of the authorship decision for the first time. The rest of this article is organised as follows. Section 2 reviews the HMM. Section 3 presents the details of our proposed approach, including the processes for initialization and learning of HMM parameters, and the Viterbi decoding process for classification. Experiments are conducted in Section 4, followed by the conclusion in Section 5. 2 Overview of HMM In this paper, we adopt the widely used sequential model, the Hidden Markov Model (HMM) (Eddy, 1996), to classify sentences of a multi-author document according to their authorship. The HMM is a probabilistic 707 model which describes the statistical dependency between a sequence of observations O = {o1, o2, · · · , oT } and a sequence of hidden states Q = {q1, q2, · · · , qT }. The observations can either be discrete variables, where each oi takes a value from a set of M symbols W = {w1, · · · , wM }, or be continuous variables. On the other hand, each qi takes one possible value from a set of N symbols, S = {s1, · · · , sN }. The behaviour of the HMM can be determined by three parameters shown as follows. 1. Initial state probabilities πππ = {π1, · · · , πN}, where πn = p(q1 = sn) and sn ∈S, for n = 1, 2, · · · , N. 2. Emission probabilities B, where each emission probability bn(ot) = p(ot|qt = sn), for t = 1, 2, · · · , T and n = 1, 2, · · · , N. 3. State transition probabilities A. It is assumed that the state transition probability has a time-homogeneous property, i.e., it is independent of the time t. Therefore, a probability p(qt = sl|qt−1 = sn) can be represented as anl, for t = 1, 2, · · · , T and l, n = 1, 2, · · · , N. 3 The Proposed Approach The ABDD proposed in this paper can be formulated as follows. Given a multi-author document C, written by N co-authors, it is assumed that each sentence in the document is written by one of the N co-authors. Furthermore, each co-author has written long successive sequences of sentences in the document. The number of authors N is known beforehand, while typically no information about the document contexts and co-authors is available. Our objective is to define the sentences of the document that are written by each co-author. Our approach consists of three steps shown as follows. 1. Estimate the initial values of the HMM parameters {πππ, B, A} with a novel unsupervised learning method. 2. Learn the values of the HMM parameters using the Baum −Welch algorithm (Baum, 1972; Bilmes and others, 1998). 3. Apply the V iterbi algorithm (Forney Jr, 1973) to find the most likely authorship of each sentence. 3.1 Initialization In our approach, we assume that we do not know anything about the document C and the authors, except the number of co-authors of the document (i.e., N). This approach applies an HMM in order to classify each sentence in document C into a class corresponding to its co-author. The step (see Sub-section 3.2) for learning of HMM parameters {πππ, B, A} is heavily dependent on the initial values of these parameters (Wu, 1983; Xu and Jordan, 1996; Huda et al., 2006). Therefore, a good initial estimation of the HMM parameters can help achieve a higher classification accuracy. 
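Before detailing the initialization steps, the overall pipeline just outlined (initial parameter estimation from GMM-clustered segment vectors, followed by Viterbi decoding of sentence authorship) can be sketched as follows. This is a simplified illustration using numpy and scikit-learn, not the authors' implementation: the feature construction is reduced to pre-built binary vectors, the diagonal GMM covariance is our own choice, and the Baum-Welch refinement of Section 3.2 is omitted for brevity.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def init_hmm(seg_small, seg_full, n_authors):
    """Rough initial HMM parameters from segment vectors (cf. Steps 1-6 below).

    seg_small: (s, D)  binary segment vectors over words occurring > 2 times
    seg_full:  (s, D') binary segment vectors over all words
    """
    gmm = GaussianMixture(n_components=n_authors, covariance_type="diag", random_state=0)
    labels = gmm.fit_predict(seg_small)
    pi = np.bincount(labels, minlength=n_authors) / len(labels)   # initial state probabilities
    A = np.ones((n_authors, n_authors))                           # add-one smoothed transitions
    for prev, cur in zip(labels[:-1], labels[1:]):
        A[prev, cur] += 1
    A /= A.sum(axis=1, keepdims=True)
    phi = np.ones((n_authors, seg_full.shape[1]))                 # p(f_k | s_n), add-one smoothed
    for n in range(n_authors):
        phi[n] += seg_full[labels == n].sum(axis=0)
    phi /= phi.sum(axis=1, keepdims=True)
    return np.log(pi + 1e-12), np.log(A), np.log(phi)

def viterbi(sent_vectors, log_pi, log_A, log_phi):
    """Most likely author sequence for binary sentence vectors (cf. Section 3.3)."""
    log_emit = sent_vectors @ log_phi.T        # log p(o_t | s_n), features present only
    T, N = log_emit.shape
    delta = log_pi + log_emit[0]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A        # scores[i, j]: best path ending in i, moving to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                          # one author index per sentence
```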
We take advantage of the sequential information of data and propose an unsupervised approach to estimate the initial values of the HMM parameters. The detailed steps of this approach are shown as follows. 1. The document C is divided into segments. Each segment has 30 successive sentences, where the ith segment comprises the ith 30 successive sentences of the document. This will produce s segments, where s = Ceiling(|C|/30) with |C| representing the total number of sentences in the document. The number of sentences in each segment (i.e., 30) is chosen in such a way that each segment is long enough for representing a particular author’s writing style, and also the division of the document gives an adequate number of segments in order to be used later for estimating the initial values of HMM parameters. 2. We select the words appearing in the document for more than two times. This produces a set of D words. For each segment, create a D-dimensional vector where the ith element in the vector is one (zero) if the ith element in the selected word set does (not) appear in the segment. Therefore, s binary D-dimensional vectors are generated, and the set of these vectors is denoted by X = {x1, · · · , xs}. 3. A multivariate Gaussian Mixture Models (GMMs) (McLachlan and Peel, 2004) is used to cluster the D-dimensional vectors X into N components denoted by {s1, s2, · · · , sN}. Note that the number of components is equal to the number of co-authors of the document. Based on the GMMs, each vector, xi, gets a label representing the Gaussian component that this vector xi is assigned to, for i = 1, 2, · · · , s. 708 4. Again, we represent each segment as a binary vector using a new feature set containing all words appearing in the document for at least once. Assuming the number of elements in the new feature set is D′, s binary D′-dimensional vectors are generated, and the set of these vectors is denoted by X′ = {x′ 1, · · · , x′ s}. Each vector x′ i will have the same label of vector xi, for i = 1, 2, · · · , s. 5. We construct a Hidden Markov model with a sequence of observations O′ and its corresponding sequence of hidden states Q′. In this model, O′ represents the resulted segment vectors X′ of the previous step. Formally, observation o′ i, is the ith binary D′-dimensional vector x′ i, that represents the ith segment of document C. In contrast, Q′ represents the corresponding authors of the observation sequence O′. Each q′ i symbolizes the most likely author of observation o′ i. According to Steps 3 and 4 of this sub-section, each x′ i representing o′ i takes one label from a set of N elements, and the label represents its state, for i = 1, 2, · · · , s. By assigning the most likely states to all hidden states (i.e., q′ i, i = 1, 2, · · · , s), the state transition probabilities A are estimated. As long as there is only one sequence of states in our model, the initial probability of each state is defined as the fraction of times that the state appears in the sequence Q′, so πn = Count(q′=sn) Count(q′) , for n = 1, 2, · · · , N. 6. Given the sequence X′, and the set of all possible values of labels, the conditional probability of feature fk in X′ given a label sn, p(fk|sn), is computed, for k = 1, 2, · · · , D′ and n = 1, 2, · · · , N. 7. The document C is partitioned into sentences. Let z = |C| represent the number of sentences in the document. We represent each sentence as a binary feature vector using the same feature set used in Step 4. 
Therefore, z binary D′-dimensional vectors, denoted by O = {o1, · · · , oz}, are generated. By using the conditional probabilities resulted in Step 6, the initial values of B are computed as p(oi|sn) = QD′ k=1 ofk i p(fk|sn), where ofk i represents the value of feature fk in sentence vector oi, for i = 1, 2, · · · , z and n = 1, 2, · · · , N. In this approach, we use add-one smoothing (Martin and Jurafsky, 2000) for avoiding zero probabilities of A and B. Furthermore, we take the logarithm function of the probability in order to simplify its calculations. The initial values of the A, B and πππ are now available. In next sub-section, the learning process of these parameter values is performed. 3.2 Learning HMM After estimating the initial values for the parameters of HMM, we now find the parameter values that maximize likelihood of the observed data sequence (i.e., sentence sequence). The learning process of the HMM parameter values is performed as follows. 1. Construct a Hidden Markov model with a sequence of observations, O, and a corresponding sequence of hidden states, Q. In this model, O represents the resulted sentence vectors (Step 7 in the previous Sub-section). Formally, the observation oi, is the ith binary D′-dimensional vector and it represents the ith sentence of document C. In contrast, Q represents the corresponding authors of observation sequence O. Each qi symbolizes the most likelihood author of observation oi, for i = 1, 2, · · · , z 2. The Baum-Welch algorithm is applied to learn the HMM parameter values. The algorithm, also known as the forward−backward algorithm (Rabiner, 1989), has two steps, i.e., E-step and M-step. The E-step finds the expected author sequence (Q) of the observation sequence (O), and the M-step updates the HMM parameter values according to the state assignments. The learning procedure starts with the initial values of HMM parameters, and then the cycle of these two steps continues until a convergence is achieved in πππ, B and A. The learned HMM parameter values will be used in the next sub-section in order to find the best sequence of authors for the given sentences. 3.3 Viterbi Decoding For a Hidden Markov model, there are more than one sequence of states in generating the observation sequence. The Viterbi decoding algorithm (Forney Jr, 1973) is used to determine the best sequence of states for generat709 ing observation sequence. Therefore, by using the Hidden Markov model that is constructed in previous sub-section and the learned HMM parameter values, the Viterbi decoding algorithm is applied to find the best sequence of authors for the given sentences. 4 Experiments In this section, we demonstrate the performance of our proposed approach by conducting experiments on benchmark datasets as well as one authentic document. Furthermore, an application on authorship attribution is presented using another popular dataset. 4.1 Datasets Three benchmark corpora widely used for authorship analysis are used to evaluate our approach. Furthermore, an authentic document is also examined. The first corpus consists of five Biblical books written by Ezekiel, Isaiah, Jeremiah, Proverbs and Job, respectively. All of these books are written in Hebrew. The five books belong to two types of literature genres. The first three books are related to prophecy literature and the other two books are related to a wisdom literature. The second corpus consists of blogs written by the Nobel Prize-winning economist Gary S. 
Becker and the renowned jurist and legal scholar Richard A. Posner. This corpus, which is titled “The Becker-Posner Blogs” (www.becker-posner-blog.com), contains 690 blogs. On average, each blog has 39 sentences talking about particular topic. The Becker-Posner Blogs dataset, which is considered as a very important dataset for authorship analysis, provides a good benchmark for testing the proposed approach in a document where the topics of authors are not distinguishable. For more challenging documents, Giannella (2015) has manually selected six singletopic documents from Becker-Posner blogs. Each document is a combination of Becker and Posner blogs that are talking about only one topic. The six merged documents with their topics and number of sentences of each alternative author are shown in Table 1. The third corpus is a group of New York Times articles of four columnists. The artiTopics Author order and number of sentences per author Tenure (Ten) Posner(73), Becker(36), Posner(33), Becker(19) Senate Filibuster (SF) Posner(39), Becker(36), Posner(28), Becker(24) Tort Reform (TR) Posner(29), Becker(31), Posner(24) Profiling (Pro) Becker(35), Posner(19), Becker(21) Microfinance (Mic) Posner(51), Becker(37), Posner(44), Becker(33) Traffic Congestion (TC) Becker(57), Posner(33), Becker(20) Table 1: The 6 merged single-topic documents of Becker-Posner blogs. cles are subjected to different topics. In our experiments, all possible multi-author documents of articles of these columnists are created. Therefore, this corpus permits us to examine the performance of our approach in documents written by more than two authors. The fourth corpus is a very early draft of a scientific article co-authored by two PhD students each being assigned a task to write some full sections of the paper. We employ this corpus in order to evaluate the performance of our approach on an authentic document. For this purpose, we have disregarded its titles, author names, references, figures and tables. After that, we get 313 sentences which are written by two authors, where Author 1 has written 131 sentences and Author 2 has written 182 sentences. 4.2 Results on Document Decomposition The performance of the proposed approach is evaluated through a set of comparisons with four state-of-the-art approaches on the four aforementioned datasets. The experiments on the first three datasets, excluding the six single-topic documents, are applied using a set of artificially merged multiauthor documents. These documents are created by using the same method that has been used by Aldebei et al. (2015). This method aims to combine a group of documents of N authors into a single merged document. Each of these documents is written by only one author. The merged document process starts by selecting a random author from an author set. Then, the first r successive and unchosen sentences from the documents of the selected author are gleaned, and are merged with the first r successive and unchosen sentences from the documents of another randomly selected au710 thor. This process is repeated till all sentences of authors’ documents are gleaned. The value of r of each transition is selected randomly from a uniform distribution varying from 1 to V . Furthermore, we follow Aldebei et al. (2015) method and assign the value of 200 to V . Bible Books We utilize the bible books of five authors and create artificial documents by merging books of any two possible authors. 
This produces 10 multi-author documents of which four have the same type of literature and six have different type of literature. Table 2 shows the comparisons of classification accuracies of these 10 documents by using our approach and the approaches developed by Koppel et al. (2011), Akiva and Koppel (2013)-500CommonWords, Akiva and Koppel (2013)-SynonymSet and Aldebei et al. (2015). Doc. 1 2 3 4 5 Different Eze-Job 85.8% 98.9% 95.0% 99.0% 99.4% Eze-Prov 77.0% 99.0% 91.0% 98.0% 98.8% Isa-Prov 71.0% 95.0% 85.0% 98.0% 98.7% Isa-Job 83.0% 98.8% 89.0% 99.0% 99.4% Jer-Job 87.2% 98.2% 93.0% 98.0% 98.5% Jer-Prov 72.2% 97.0% 75.0% 99.0% 99.5% Overall 79.4% 97.8% 88.0% 98.5% 99.1% Same Job-Prov 85.0% 94.0% 82.0% 95.0% 98.2% Isa-Jer 72.0% 66.9% 82.9% 71.0% 72.1% Isa-Eze 79.0% 80.0% 88.0% 83.0% 83.2% Jer-Eze 82.0% 97.0% 96.0% 97.0% 97.3% Overall 79.5% 84.5% 87.2% 86.5% 87.7% Table 2: Classification accuracies of merged documents of different literature or the same literature bible books using the approaches of 1- Koppel et al. (2011), 2- Akiva and Koppel (2013)-500CommonWords, 3- Akiva and Koppel (2013)-SynonymSet, 4- Aldebei et al. (2015) and 5- our approach. As shown in Table 2, the results of our approach are very promising. The overall classification accuracies of documents of the same literature or different literature are better than the other four state-of-the-art approaches. In our approach, we have proposed an unsupervised method to estimate the initial values of the HMM parameters (i.e., πππ, B and A) using segments. Actually, the initial values of the HMM parameters are sensitive factors to the convergence and accuracy of the learning process. Most of the previous works using HMM have estimated these values by clustering the original data, i.e., they have clustered sentences rather than segments. Figure 1 compares the results of using segments with the results of using sentences for estimating the initial parameters of HMM in the proposed approach for the 10 merged Bible documents in terms of the accuracy results and number of iterations till convergence, respectively. From Figures 1, one can notice that the accuracy results obtained by using segments for estimating the initial HMM parameters are significantly higher than using sentences for all merged documents. Furthermore, the number of iterations required for convergence for each merged document using segments is significantly smaller than using sentences. Figure 1: Comparisons between using segments and using sentences in the unsupervised method for estimating the initial values of the HMM of our approach in terms of accuracy (representd as the cylinders) and number of iterations required for convergence (represented as the numbers above cylinders) using the 10 merged Bible documents. Becker-Posner Blogs (Controlling for Topics) In our experiments, we represent BeckerPosner blogs in two different terms. The first term is as in Aldebei et al. (2015) and Akiva and Koppel (2013) approaches, where the whole blogs are exploited to create one merged document. The resulted merged document contains 26,922 sentences and more than 240 switches between the two authors. We obtain an accuracy of 96.72% when testing our approach in the merged document. The obtained result of such type of document, which does not have topic indications to differentiate between authors, is delightful. 
The first set of cylinders labelled “Becker-Posner” in Figure 2 shows the comparisons of classification accuracies of our approach and the approaches of Akiva and Koppel (2013) and Aldebei et al. 711 (2015) when the whole blogs are used to create one merged document. As shown in Figure 2, our approach yields better classification accuracy than the other two approaches. Figure 2: Classification accuracy comparisons between our approach and the approaches presented in Akiva and Koppel (2013) and Aldebei et al. (2015) in Becker-Posner documents, and documents created by three or four New York Times columnists (TF = Thomas Friedman, PK = Paul Krugman, MD = Maureeen Dowd, GC = Gail Collins). The second term is as in the approach of Giannella (2015), where six merged single-topic documents are formed. Due to comparatively shorter lengths of these documents, the number of resulted segments that are used for the unsupervised learning in Sub-section 3.1 is clearly not sufficient. Therefore, instead of splitting each document into segments of 30 sentences length each, we split it into segments of 10 sentences length each. Figure 3 shows the classification accuracies of the six documents using our approach and the approach presented in Giannella (2015). It is observed that our proposed approach has achieved higher classification accuracy than Giannella (2015) in all of the six documents. Figure 3: Classification accuracy comparisons between our approach and the approach presented in (Giannella, 2015) in the six singletopic documents of Becker-Posner blogs. New York Times Articles (N > 2) We perform our approach on New York Times articles. For this corpus, the experiments can be classified into three groups. The first group is for those merged documents that are created by combining articles of any pair of the four authors. The six resulted documents have on average more than 250 switches between authors. The classification accuracies of these documents are between 93.9% and 96.3%. It is notable that the results are very satisfactory for all documents. For comparisons, the classification accuracies of the same documents using the approach presented in Aldebei et al. (2015) range from 93.3% to 96.1%. Furthermore, some of these documents have produced an accuracy lower than 89.0% using the approach of Akiva and Koppel (2013). The second group is for those merged documents that are created by combining articles of any three of the four authors. The four resulted documents have on average more than 350 switches among the authors. The third group is for the document that are created by combining articles of all four columnists. The resulted merged document has 46,851 sentences and more than 510 switches among authors. Figure 2 shows the accuracies of the five resulted documents regarding the experiments of the last two groups. Furthermore, it shows the comparisons of our approach and the approaches presented in Aldebei et al. (2015) and Akiva and Koppel (2013). It is noteworthy that the accuracies of our approach are better than the other two approaches in all of the five documents. Authentic Document In order to demonstrate that our proposed approach is applicable on genuine documents as well, we have applied the approach on first draft of a scientific paper written by two Ph.D. students (Author 1 and Author 2) in our research group. Each student was assigned a task to write some full sections of the paper. Author 1 has contributed 41.9% of the document and Author 2 contributed 58.1%. 
Table 3 shows the number of correctly assigned sentences of each author and the classification accuracy resulted using the proposed approach. Table 3 also displays the authors’ contributions predicted using our approach. As 712 Author Classification Accuracy Predicted Contribution 1 98.5% 47.6% 2 89.0% 52.4% Accuracy 93.0% Table 3: The classification accuracies and predicted contributions of the two authors of the scientific paper using the proposed approach. shown in Table 3, the proposed approach has achieved an overall accuracy of 93.0% for the authentic document. 4.3 Results on Authorship Attribution One of the applications that can take advantage of the proposed approach is the authorship attribution (i.e., determining a real author of an anonymous document given a set of labeled documents of candidate authors). The Federalist Papers dataset have been employed in order to examine the performance of our approach for this application. This dataset is considered as a benchmark in authorship attribution task and has been used in many studies related to this task (Juola, 2006; Savoy, 2013; Savoy, 2015). The Federalist Papers consist of 85 articles published anonymously between 1787 and 1788 by Alexander Hamilton, James Madison and John Jay to persuade the citizens of the State of New York to ratify the Constitution. Of the 85 articles, 51 of them were written by Hamilton, 14 were written by Madison and 5 were written by Jay. Furthermore, 3 more articles were written jointly by Hamilton and Madison. The other 12 articles (i.e., articles 49-58 and 62-63), the famous “anonymous articles”, have been alleged to be written by Hamilton or Madison. To predict a real author of the 12 anonymous articles, we use the first five undisputed articles of both authors, Hamilton and Madison. Note that we ignore the articles of Jay because the anonymous articles are alleged to be written by Hamilton or Madison. The five articles of Hamilton (articles 1 and 6-9) are combined with the five articles of Madison (articles 10, 14 and 37-39) in a single merged document where all the articles of Hamilton are inserted into the first part of the merged document and all the articles of Madison are inserted into the second part of the merged document. The merged document has 10 undisputed articles covering eight different topics (i.e., each author has four different topics). Before applying the authorship attribution on the 12 anonymous articles, we have tested our approach on the resulted merged document and an accuracy of 95.2% is achieved in this document. Note that, the authorial components in this document are not thematically notable. For authorship attribution of the 12 anonymous articles, we add one anonymous article each time on the middle of the merged document, i.e., between Hamilton articles part and Madison articles part. Then, we apply our approach on the resulted document, which has 11 articles, to determine to which part the sentences of the anonymous article are classified to be sectences of Hamilton or Madison. As the ground truth for our experiments, all of these 12 articles can be deemed to have been written by Madison becuase the results of all recent state-of-the-art studies testing on these articles on authorship attribution have classified the articles to Madison’s. Consistent with the state-of-the-art approaches, these 12 anonymous articles are also correctly classified to be Madison’s using the proposed approach. 
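The sentence-level decisions behind these attributions come from the Viterbi pass of Section 3.3 over the merged document. Below is a minimal log-space Viterbi sketch of our own, with the author-conditional emission scores assumed to be precomputed from the Naive-Bayesian features.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely author sequence for a document of T sentences and N authors.

    log_pi: (N,)   initial author log-probabilities
    log_A:  (N, N) author-transition log-probabilities
    log_B:  (T, N) log p(sentence_t | author_n), precomputed emission scores
    """
    T, N = log_B.shape
    delta = np.empty((T, N))
    back = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # scores[i, j]: author i -> author j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```

Under this view, the confidence reported for a disputed article is simply the fraction of its sentences whose decoded state is Madison.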
Actually, all sentences of articles 50,52-58 and 62-63 are classified as Madison’s sentences, and 81% of the sentences of article 49 and 80% of article 51 are classified as Madison’s sentences. These percentages can be deemed as the confidence levels (i.e., 80% conferdence for articles 49, 81% for 51, and 100% confidences for all other articles) in making our conclusion of the authorship contributions. 5 Conclusions We have developed an unsupervised approach for decomposing a multi-author document based on authorship. Different from the stateof-the-art approaches, we have innovatively made use of the sequential information hidden among document elements. For this purpose, we have used HMM and constructed a sequential probabilistic model, which is used to find the best sequence of authors that represents the sentences of the document. An unsupervised learning method has also been developed to estimate the initial parameter values of HMM. Comparative experiments conducted on benchmark datasets have demonstrated the effectiveness of our ideas with superior perfor713 mance achieved on both artificial and authentic documents. An application of the proposed approach on authorship attribution has also achieved perfect results of 100% accuracies together with confidence measurement for the first time. References [Akiva and Koppel2013] Navot Akiva and Moshe Koppel. 2013. A generic unsupervised method for decomposing multi-author documents. Journal of the American Society for Information Science and Technology, 64(11):2256–2264. [Aldebei et al.2015] Khaled Aldebei, Xiangjian He, and Jie Yang. 2015. Unsupervised decomposition of a multi-author document based on naivebayesian model. ACL, Volume 2: Short Papers, page 501. [Baum1972] Leonard E Baum. 1972. An equality and associated maximization technique in statistical estimation for probabilistic functions of markov processes. Inequalities, 3:1–8. [Bilmes and others1998] JeffA Bilmes et al. 1998. A gentle tutorial of the em algorithm and its application to parameter estimation for gaussian mixture and hidden markov models. International Computer Science Institute, 4(510):126. [Brocardo et al.2013] Marcelo Luiz Brocardo, Issa Traore, Shatina Saad, and Isaac Woungang. 2013. Authorship verification for short messages using stylometry. In Computer, Information and Telecommunication Systems (CITS), 2013 International Conference on, pages 1–6. IEEE. [Eddy1996] Sean R Eddy. 1996. Hidden markov models. Current opinion in structural biology, 6(3):361–365. [Forney Jr1973] G David Forney Jr. 1973. The viterbi algorithm. Proceedings of the IEEE, 61(3):268–278. [Giannella2015] Chris Giannella. 2015. An improved algorithm for unsupervised decomposition of a multi-author document. Journal of the Association for Information Science and Technology. [Huda et al.2006] Md Shamsul Huda, Ranadhir Ghosh, and John Yearwood. 2006. A variable initialization approach to the em algorithm for better estimation of the parameters of hidden markov model based acoustic modeling of speech signals. In Advances in Data Mining. Applications in Medicine, Web Mining, Marketing, Image and Signal Mining, pages 416–430. Springer. [Juola2006] Patrick Juola. 2006. Authorship attribution. Foundations and Trends in information Retrieval, 1(3):233–334. [Kestemont et al.2011] Mike Kestemont, Kim Luyckx, and Walter Daelemans. 2011. Intrinsic plagiarism detection using character trigram distance scores. Proceedings of the PAN. 
[Koppel et al.2011] Moshe Koppel, Navot Akiva, Idan Dershowitz, and Nachum Dershowitz. 2011. Unsupervised decomposition of a document into authorial components. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1356– 1364. Association for Computational Linguistics. [Martin and Jurafsky2000] James H Martin and Daniel Jurafsky. 2000. Speech and language processing. International Edition. [McLachlan and Peel2004] Geoffrey McLachlan and David Peel. 2004. Finite mixture models. John Wiley & Sons. [Potha and Stamatatos2014] Nektaria Potha and Efstathios Stamatatos. 2014. A profile-based method for authorship verification. In Artificial Intelligence: Methods and Applications, pages 313–326. Springer. [Rabiner1989] Lawrence R Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286. [Rogovschi et al.2010] Nicoleta Rogovschi, Mustapha Lebbah, and Younes Bennani. 2010. Learning self-organizing mixture markov models. Journal of Nonlinear Systems and Applications, 1:63–71. [Savoy2013] Jacques Savoy. 2013. The federalist papers revisited: A collaborative attribution scheme. Proceedings of the American Society for Information Science and Technology, 50(1):1–8. [Savoy2015] Jacques Savoy. 2015. Estimating the probability of an authorship attribution. Journal of the Association for Information Science and Technology. [Stein et al.2011] Benno Stein, Nedim Lipka, and Peter Prettenhofer. 2011. Intrinsic plagiarism analysis. Language Resources and Evaluation, 45(1):63–82. [Wu1983] CF JeffWu. 1983. On the convergence properties of the em algorithm. The Annals of statistics, pages 95–103. [Xu and Jordan1996] Lei Xu and Michael I Jordan. 1996. On convergence properties of the em algorithm for gaussian mixtures. Neural computation, 8(1):129–151. 714
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 715–725, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Automatic Text Scoring Using Neural Networks Dimitrios Alikaniotis Department of Theoretical and Applied Linguistics University of Cambridge Cambridge, UK [email protected] Helen Yannakoudakis The ALTA Institute Computer Laboratory University of Cambridge Cambridge, UK [email protected] Marek Rei The ALTA Institute Computer Laboratory University of Cambridge Cambridge, UK [email protected] Abstract Automated Text Scoring (ATS) provides a cost-effective and consistent alternative to human marking. However, in order to achieve good performance, the predictive features of the system need to be manually engineered by human experts. We introduce a model that forms word representations by learning the extent to which specific words contribute to the text’s score. Using Long-Short Term Memory networks to represent the meaning of texts, we demonstrate that a fully automated framework is able to achieve excellent results over similar approaches. In an attempt to make our results more interpretable, and inspired by recent advances in visualizing neural networks, we introduce a novel method for identifying the regions of the text that the model has found more discriminative. 1 Introduction Automated Text Scoring (ATS) refers to the set of statistical and natural language processing techniques used to automatically score a text on a marking scale. The advantages of ATS systems have been established since Project Essay Grade (PEG) (Page, 1967; Page, 1968), one of the earliest systems whose development was largely motivated by the prospect of reducing labour-intensive marking activities. In addition to providing a cost-effective and efficient approach to large-scale grading of (extended) text, such systems ensure a consistent application of marking criteria, therefore facilitating equity in scoring. There is a large body of literature with regards to ATS systems of text produced by nonnative English-language learners (Page, 1968; Attali and Burstein, 2006; Rudner and Liang, 2002; Elliot, 2003; Landauer et al., 2003; Briscoe et al., 2010; Yannakoudakis et al., 2011; Sakaguchi et al., 2015, among others), overviews of which can be found in various studies (Williamson, 2009; Dikli, 2006; Shermis and Hammer, 2012). Implicitly or explicitly, previous work has primarily treated text scoring as a supervised text classification task, and has utilized a large selection of techniques, ranging from the use of syntactic parsers, via vectorial semantics combined with dimensionality reduction, to generative and discriminative machine learning. As multiple factors influence the quality of texts, ATS systems typically exploit a large range of textual features that correspond to different properties of text, such as grammar, vocabulary, style, topic relevance, and discourse coherence and cohesion. In addition to lexical and part-ofspeech (POS) ngrams, linguistically deeper features such as types of syntactic constructions, grammatical relations and measures of sentence complexity are among some of the properties that form an ATS system’s internal marking criteria. The final representation of a text typically consists of a vector of features that have been manually selected and tuned to predict a score on a marking scale. 
Although current approaches to scoring, such as regression and ranking, have been shown to achieve performance that is indistinguishable from that of human examiners, there is substantial manual effort involved in reaching these results on different domains, genres, prompts and so forth. Linguistic features intended to capture the aspects of writing to be assessed are hand-selected and tuned for specific domains. In order to perform well on different data, separate models with distinct feature sets are typically tuned. 715 Prompted by recent advances in deep learning and the ability of such systems to surpass state-ofthe-art models in similar areas (Tang, 2015; Tai et al., 2015), we propose the use of recurrent neural network models for ATS. Multi-layer neural networks are known for automatically learning useful features from data, with lower layers learning basic feature detectors and upper levels learning more high-level abstract features (Lee et al., 2009). Additionally, recurrent neural networks are well-suited for modeling the compositionality of language and have been shown to perform very well on the task of language modeling (Mikolov et al., 2011; Chelba et al., 2013). We therefore propose to apply these network structures to the task of scoring, in order to both improve the performance of ATS systems and learn the required feature representations for each dataset automatically, without the need for manual tuning. More specifically, we focus on predicting a holistic score for extended-response writing items.1 However, automated models are not a panacea, and their deployment depends largely on the ability to examine their characteristics, whether they measure what is intended to be measured, and whether their internal marking criteria can be interpreted in a meaningful and useful way. The deep architecture of neural network models, however, makes it rather difficult to identify and extract those properties of text that the network has identified as discriminative. Therefore, we also describe a preliminary method for visualizing the information the model is exploiting when assigning a specific score to an input text. 2 Related Work In this section, we describe a number of the more influential and/or recent approaches in automated text scoring of non-native English-learner writing. Project Essay Grade (Page, 1967; Page, 1968; Page, 2003) is one of the earliest automated scoring systems, predicting a score using linear regression over vectors of textual features considered to be proxies of writing quality. Intelligent Essay Assessor (Landauer et al., 2003) uses Latent Semantic Analysis to compute the semantic similarity between texts at specific grade points and a test text, which is assigned a score based on the ones in 1The task is also referred to as Automated Essay Scoring. Throughout this paper, we use the terms text and essay (scoring) interchangeably. the training set to which it is most similar. Lonsdale and Strong-Krause (2003) use the Link Grammar parser (Sleator and Templerley, 1995) to analyse and score texts based on the average sentencelevel scores calculated from the parser’s cost vector. The Bayesian Essay Test Scoring sYstem (Rudner and Liang, 2002) investigates multinomial and Bernoulli Naive Bayes models to classify texts based on shallow content and style features. eRater (Attali and Burstein, 2006), developed by the Educational Testing Service, was one of the first systems to be deployed for operational scoring in high-stakes assessments. 
The model uses a number of different features, including aspects of grammar, vocabulary and style (among others), whose weights are fitted to a marking scheme by regression. Chen et al. (2010) use a voting algorithm and address text scoring within a weakly supervised bag-of-words framework. Yannakoudakis et al. (2011) extract deep linguistic features and employ a discriminative learning-to-rank model that outperforms regression. Recently, McNamara et al. (2015) used a hierachical classification approach to scoring, utilizing linguistic, semantic and rhetorical features, among others. Farra et al. (2015) utilize variants of logistic and linear regression and develop models that score persuasive essays based on features extracted from opinion expressions and topical elements. There have also been attempts to incorporate more diverse features to text scoring models. Klebanov and Flor (2013) demonstrate that essay scoring performance is improved by adding to the model information about percentages of highly associated, mildly associated and dis-associated pairs of words that co-exist in a given text. Somasundaran et al. (2014) exploit lexical chains and their interaction with discourse elements for evaluating the quality of persuasive essays with respect to discourse coherence. Crossley et al. (2015) identify student attributes, such as standardized test scores, as predictive of writing success and use them in conjunction with textual features to develop essay scoring models. In 2012, Kaggle,2 sponsored by the Hewlett Foundation, hosted the Automated Student Assessment Prize (ASAP) contest, aiming to demon2http://www.kaggle.com/c/asap-aes/ 716 strate the capabilities of automated text scoring systems (Shermis, 2015). The dataset released consists of around twenty thousand texts (60% of which are marked), produced by middle-school English-speaking students, which we use as part of our experiments to develop our models. 3 Models 3.1 C&W Embeddings Collobert and Weston (2008) and Collobert et al. (2011) introduce a neural network architecture (Fig. 1a) that learns a distributed representation for each word w in a corpus based on its local context. Concretely, suppose we want to learn a representation for some target word wt found in an n-sized sequence of words S = (w1, . . . , wt, . . . , wn) based on the other words which exist in the same sequence (∀wi ∈S | wi ̸= wt). In order to derive this representation, the model learns to discriminate between S and some ‘noisy’ counterpart S′ in which the target word wt has been substituted for a randomly sampled word from the vocabulary: S′ = (w1, . . . , wc, . . . , wn | wc ∼V). In this way, every word w is more predictive of its local context than any other random word in the corpus. Every word in V is mapped to a real-valued vector in Ωvia a mapping function C(·) such that C(wi) = ⟨M⋆i⟩, where M ∈RD×|V| is the embedding matrix and ⟨M⋆i⟩is the ith column of M. The network takes S as input by concatenating the vectors of the words found in it; st = ⟨C(w1)⊺∥. . . ∥C(wt)⊺∥. . . ∥C(wn)⊺⟩∈ RnD. Similarly, S′ is formed by substituting C(wt) for C(wc) ∼M | wc ̸= wt. The input vector is then passed through a hard tanh layer defined as, htanh(x) =      −1 x < −1 x −1 ⩽x ⩽1 1 x > 1 (1) which feeds a single linear unit in the output layer. The function that is computed by the network is ultimately given by (4): st = ⟨M⊺ ⋆1∥. . . ∥M⊺ ⋆t∥. . . 
∥M⊺ ⋆n⟩⊺ (2) i = σ(Whist + bh) (3) f(st) = Wohi + bo (4) f(s), bo ∈R1 Woh ∈RH×1 Whi ∈RD×H s ∈RD bo ∈RH where M, Woh, Whi, bo, bh are learnable parameters, D, H are hyperparameters controlling the size of the input and the hidden layer, respectively; σ is the application of an element-wise non-linear function (htanh in this case). The model learns word embeddings by ranking the activation of the true sequence S higher than the activation of its ‘noisy’ counterpart S′. The objective of the model then becomes to minimize the hinge loss which ensures that the activations of the original and ‘noisy’ ngrams will differ by at least 1: losscontext(target, corrupt) = [1 −f(st) + f(sck)]+, ∀k ∈ZE (5) where E is another hyperparameter controlling the number of ‘noisy’ sequences we give along with the correct sequence (Mikolov et al., 2013; Gutmann and Hyv¨arinen, 2012). 3.2 Augmented C&W model Following Tang (2015), we extend the previous model to capture not only the local linguistic environment of each word, but also how each word contributes to the overall score of the essay. The aim here is to construct representations which, along with the linguistic information given by the linear order of the words in each sentence, are able to capture usage information. Words such as is, are, to, at which appear with any essay score are considered to be under-informative in the sense that they will activate equally both on high and low scoring essays. Informative words, on the other hand, are the ones which would have an impact on the essay score (e.g., spelling mistakes). In order to capture those score-specific word embeddings (SSWEs), we extend (4) by adding a further linear unit in the output layer that performs linear regression, predicting the essay score. Using (2), the activations of the network (presented in Fig. 1b) are given by: 717 ... ... ... the recent advances (a) ... ... ... the recent advances (b) Figure 1: Architecture of the original C&W model (left) and of our extended version (right). fss(s) = Woh1i + bo1 (6) fcontext(s) = Woh2i + bo2 (7) fss(s) ∈[min(score), max(score)] bo1 ∈R1 Woh1 ∈R1×H The error we minimize for fss (where ss stands for score specific) is the mean squared error between the predicted ˆy and the actual essay score y: lossscore(s) = 1 N N X i=1 (ˆyi −yi)2 (8) From (5) and (8) we compute the overall loss function as a weighted linear combination of the two loss functions (9), back-propagating the error gradients to the embedding matrix M: lossoverall(s) = α · losscontext(s, s′) + (1 −α) · lossscore(s) (9) where α is the hyper-parameter determining how the two error functions should be weighted. α values closer to 0 will place more weight on the scorespecific aspect of the embeddings, whereas values closer to 1 will favour the contextual information. Fig. 2 shows the advantage of using SSWEs in the present setting. Based solely on the information provided by the linguistic environment, words such as computer and laptop are going to be placed together with their mis-spelled counterparts copmuter and labtop (Fig. 2a). This, however, does not reflect the fact that the mis-spelled words tend to appear in lower scoring essays. Using SSWEs, the correctly spelled words are pulled apart in the vector space from the incorrectly spelled ones, retaining, however, the information that labtop and copmuter are still contextually related (Fig. 2b). 3.3 Long-Short Term Memory Network We use the SSWEs obtained by our model to derive continuous representations for each essay. 
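Before moving to the essay-level model, the combined SSWE objective of the previous subsection can be summarised in a short sketch. This is our own PyTorch rendering with illustrative names and sizes; for brevity it ranks each real window against a single corrupted window rather than E of them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSWEWindowModel(nn.Module):
    """C&W-style window scorer with two outputs: a context (ranking) score and
    a score-specific regression output."""

    def __init__(self, vocab_size, emb_dim=200, hidden=100, window=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.hidden = nn.Linear(window * emb_dim, hidden)
        self.f_context = nn.Linear(hidden, 1)
        self.f_score = nn.Linear(hidden, 1)

    def forward(self, window_ids):                   # (batch, window) word ids
        h = F.hardtanh(self.hidden(self.emb(window_ids).flatten(1)))
        return self.f_context(h).squeeze(-1), self.f_score(h).squeeze(-1)

def sswe_loss(model, real_windows, corrupt_windows, essay_scores, alpha=0.1):
    ctx_real, score_pred = model(real_windows)
    ctx_corrupt, _ = model(corrupt_windows)
    loss_context = torch.clamp(1.0 - ctx_real + ctx_corrupt, min=0.0).mean()  # hinge, Eq. (5)
    loss_score = ((score_pred - essay_scores) ** 2).mean()                    # MSE, Eq. (8)
    return alpha * loss_context + (1.0 - alpha) * loss_score                  # Eq. (9)
```

Gradients from both terms flow back into the embedding matrix, which is what pulls mis-spelled variants away from their correctly spelled neighbours while keeping them contextually close.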
We treat each essay as a sequence of tokens and explore the use of uni- and bi-directional (Graves, 2012) Long-Short Term Memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) in order to embed these sequences in a vector of fixed size. Both uni- and bi-directional LSTMs have been effectively used for embedding long sequences (Hermann et al., 2015). LSTMs are a kind of recurrent neural network (RNN) architecture in which the output at time t is conditioned on the input s both at time t and at time t −1: yt = Wyhht + by (10) ht = H(Whsst + Whhht−1 + bh) (11) where st is the input at time t, and H is usually an element-wise application of a non-linear function. In LSTMs, H is substituted for a composite function defining ht as: it = σ(Wisst + Wihht−1+ Wicct−1 + bi) (12) ft = σ(Wfsst + Wfhht−1+ Wfcct−1 + bf) (13) ct = it ⊙g(Wcsst + Wchht−1 + bc)+ ft ⊙ct−1 (14) 718 1 2 3 4 1 2 3 COPMUTAR COMPUTER LAPTOP LABTOP (a) Standard neural embeddings 1 2 3 4 1 2 3 COPMUTAR COMPUTER LAPTOP LABTOP (b) Score-specific word embeddings Figure 2: Comparison between standard and score-specific word embeddings. By virtue of appearing in similar environments, standard neural embeddings will place the correct and the incorrect spelling closer in the vector space. However, since the mistakes are found in lower scoring essays, SSWEs are able to discriminate between the correct and the incorrect versions without loss in contextual meaning. the wthe the wthe recent wrecent recent wrecent advances wadvances advances wadvances ... w... ... w... −→ h ←− h y Figure 3: A single-layer Long Short Term Memory (LSTM) network. The word vectors wi enter the input layer one at a time. The hidden layer that has been formed at the last timestep is used to predict the essay score using linear regression. We also explore the use of bi-directional LSTMs (dashed arrows). For ‘deeper’ representations, we can stack more LSTM layers after the hidden layer shown here. ot = σ(Wosst + Wohht−1+ Wocct + bo) (15) ht = ot ⊙h(ct) (16) where g, σ and h are element-wise non-linear functions such as the logistic sigmoid ( 1 1+e−x ) and the hyperbolic tangent (e2z−1 e2z+1); ⊙is the Hadamard product; W, b are the learned weights and biases respectively; and i, f, o and c are the input, forget, output gates and the cell activation vectors respectively. Training the LSTM in a uni-directional manner (i.e., from left to right) might leave out important information about the sentence. For example, our interpretation of a word at some point ti might be different once we know the word at ti+5. An effective way to get around this issue has been to train the LSTM in a bidirectional manner. This requires doing both a forward and a backward pass of the sequence (i.e., feeding the words from left to right and from right to left). The hidden layer element in (10) can therefore be re-written as the concatenation of the forward and backward hidden vectors: yt = Wyh ←− h ⊺ t −→ h ⊺ t ! + by (17) We feed the embedding of each word found in each essay to the LSTM one at a time, zero-padding shorter sequences. We form Ddimensional essay embeddings by taking the activation of the LSTM layer at the timestep where the last word of the essay was presented to the network. In the case of bi-directional LSTMs, the two independent passes of the essay (from left to right and from right to left) are concatenated together to predict the essay score. These essay embeddings are then fed to a linear unit in the output layer which predicts the essay score (Fig. 3). 
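A compact sketch of this essay-level network, again our own PyTorch rendering; the zero-padding and dropout details of the full model are omitted.

```python
import torch
import torch.nn as nn

class EssayScorer(nn.Module):
    """(Bi)LSTM over pre-trained score-specific word embeddings; the hidden
    state(s) after the last word feed a single linear unit predicting the score."""

    def __init__(self, embeddings, hidden=10, bidirectional=True):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(embeddings, freeze=False)
        self.lstm = nn.LSTM(embeddings.size(1), hidden,
                            batch_first=True, bidirectional=bidirectional)
        self.score = nn.Linear(hidden * (2 if bidirectional else 1), 1)

    def forward(self, token_ids):                    # (batch, seq_len) word ids
        _, (h_n, _) = self.lstm(self.emb(token_ids))
        # h_n: (directions, batch, hidden) -> concatenate forward and backward passes
        feats = h_n.transpose(0, 1).reshape(token_ids.size(0), -1)
        return self.score(feats).squeeze(-1)
```

Stacking further LSTM layers before the output unit gives the two-layer variants compared in the experiments.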
We use the mean square error between the predicted and the gold score as our loss function, and optimize with RMSprop (Dauphin et al., 2015), propagating the errors back to the word embeddings.3 3The maximum time for jointly training a particular SSWE + LSTM combination took about 55–60 hours on an Amazon EC2 g2.2xlarge instance (average time was 27–30 hours). 719 3.4 Other Baselines We train a Support Vector Regression model (see Section 4), which is one of the most widely used approaches in text scoring. We parse the data using the RASP parser (Briscoe et al., 2006) and extract a number of different features for assessing the quality of the essays. More specifically, we use character and part-of-speech unigrams, bigrams and trigrams; word unigrams, bigrams and trigrams where we replace open-class words with their POS; and the distribution of common nouns, prepositions, and coordinators. Additionally, we extract and use as features the rules from the phrase-structure tree based on the top parse for each sentence, as well as an estimate of the error rate based on manually-derived error rules. Ngrams are weighted using tf–idf, while the rest are count-based and scaled so that all features have approximately the same order of magnitude. The final input vectors are unit-normalized to account for varying text-length biases. Further to the above, we also explore the use of the Distributed Memory Model of Paragraph Vectors (PV-DM) proposed by Le and Mikolov (2014), as a means to directly obtain essay embeddings. PV-DM takes as input word vectors which make up ngram sequences and uses those to predict the next word in the sequence. A feature of PV-DM, however, is that each ‘paragraph’ is assigned a unique vector which is used in the prediction. This vector, therefore, acts as a ‘memory’, retaining information from all contexts that have appeared in this paragraph. Paragraph vectors are then fed to a linear regression model to obtain essay scores (we refer to this model as doc2vec). Additionally, we explore the effect of our scorespecific method for learning word embeddings, when compared against three different kinds of word embeddings: • word2vec embeddings (Mikolov et al., 2013) trained on our training set (see Section 4). • Publicly available word2vec embeddings (Mikolov et al., 2013) pre-trained on the Google News corpus (ca. 100 billion words), which have been very effective in capturing solely contextual information. • Embeddings that are constructed on the fly by the LSTM, by propagating the errors from its hidden layer back to the embedding matrix (i.e., we do not provide any pre-trained word embeddings).4 4 Dataset The Kaggle dataset contains 12.976 essays ranging from 150 to 550 words each, marked by two raters (Cohen’s κ = 0.86). The essays were written by students ranging from Grade 7 to Grade 10, comprising eight distinct sets elicited by eight different prompts, each with distinct marking criteria and score range.5 For our experiments, we use the resolved combined score between the two raters, which is calculated as the average between the two raters’ scores (if the scores are close), or is determined by a third expert (if the scores are far apart). Currently, the state-of-the-art on this dataset has achieved a Cohen’s κ = 0.81 (using quadratic weights). However, the test set was released without the gold score annotations, rendering any comparisons futile, and we are therefore restricted in splitting the given training set to create a new test set. 
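As a point of reference, a stripped-down stand-in for the feature-based baseline of Section 3.4 can be assembled with scikit-learn. This hypothetical pipeline uses only word-ngram tf-idf features; the actual baseline additionally draws on RASP parses, POS ngrams, phrase-structure rules and error-rate estimates.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Word-ngram tf-idf (l2-normalised by default) feeding a support vector regressor.
svr_baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), lowercase=True),
    SVR(kernel="linear"),
)
# svr_baseline.fit(train_texts, train_scores)
# predicted_scores = svr_baseline.predict(test_texts)
```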
The sets where divided as follows: 80% of the entire dataset was reserved for training/validation, and 20% for testing. 80% of the training/validation subset was used for actual training, while the remaining 20% for validation (in absolute terms for the entire dataset: 64% training, 16% validation, 20% testing). To facilitate future work, we release the ids of the validation and test set essays we used in our experiments, in addition to our source code and various hyperparameter values.6 5 Experiments 5.1 Results The hyperparameters for our model were as follows: sizes of the layers H, D, the learning rate η, the window size n, the number of ‘noisy’ sequences E and the weighting factor α. Also the hyperparameters of the LSTM were the size of the LSTM layer DLSTM as well as the dropout rate r. 4Another option would be to use standard C&W embeddings; however, this is equivalent to using SSWEs with α = 1, which we found to produce low results. 5Five prompts employed a holistic scoring rubric, one was scored with a two-trait rubric, and two were scored with a multi-trait rubric, but reported as a holistic score (Shermis and Hammer, 2012). 6The code, by-model hyperparameter configurations and the IDs of the testing set are available at https:// github.com/dimalik/ats/. 720 Model Spearman’s ρ Pearson r RMSE Cohen’s κ doc2vec 0.62 0.63 4.43 0.85 SVM 0.78 0.77 8.85 0.75 LSTM 0.59 0.60 6.8 0.54 BLSTM 0.7 0.5 7.32 0.36 Two-layer LSTM 0.58 0.55 7.16 0.46 Two-layer BLSTM 0.68 0.52 7.31 0.48 word2vec + LSTM 0.68 0.77 5.39 0.76 word2vec + BLSTM 0.75 0.86 4.34 0.85 word2vec + Two-layer LSTM 0.76 0.71 6.02 0.69 word2vec + Two-layer BLSTM 0.78 0.83 4.79 0.82 word2vecpre-trained + Two-layer BLSTM 0.79 0.91 3.2 0.92 SSWE + LSTM 0.8 0.94 2.9 0.94 SSWE + BLSTM 0.8 0.92 3.21 0.95 SSWE + Two-layer LSTM 0.82 0.93 3 0.94 SSWE + Two-layer BLSTM 0.91 0.96 2.4 0.96 Table 1: Results of the different models on the Kaggle dataset. All resulting vectors were trained using linear regression. We optimized the parameters using a separate validation set (see text) and report the results on the test set. Since the search space would be massive for grid search, the best hyperparameters were determined using Bayesian Optimization (Snoek et al., 2012). In this context, the performance of our models in the validation set is modeled as a sample from a Gaussian process (GP) by constructing a probabilistic model for the error function and then exploiting this model to make decisions about where to next evaluate the function. The hyperparameters for our baselines were also determined using the same methodology. All models are trained on our training set (see Section 4), except the one prefixed ‘word2vecpre-trained’ which uses pre-trained embeddings on the Google News Corpus. We report the Spearman’s rank correlation coefficient ρ, Pearson’s product-moment correlation coefficient r, and the root mean square error (RMSE) between the predicted scores and the gold standard on our test set, which are considered more appropriate metrics for evaluating essay scoring systems (Yannakoudakis and Cummins, 2015). However, we also report Cohen’s κ with quadratic weights, which was the evaluation metric used in the Kaggle competition. Performance of the models is shown in Table 1. In terms of correlation, SVMs produce competitive results (ρ = 0.78 and r = 0.77), outperforming doc2vec, LSTM and BLSTM, as well as their deep counterparts. 
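The four figures reported in Table 1 can be computed from gold and predicted scores with a small helper such as the one below (our own utility; kappa uses quadratic weights on integer-rounded scores, mirroring the Kaggle metric).

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def evaluate(gold, pred):
    rho, _ = spearmanr(gold, pred)
    r, _ = pearsonr(gold, pred)
    rmse = float(np.sqrt(mean_squared_error(gold, pred)))
    kappa = cohen_kappa_score(np.round(gold).astype(int),
                              np.round(pred).astype(int),
                              weights="quadratic")
    return {"spearman": rho, "pearson": r, "rmse": rmse, "kappa": kappa}
```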
As described above, the SVM model has rich linguistic knowledge and consists of hand-picked features which have achieved excellent performance in similar tasks (Yannakoudakis et al., 2011). However, in terms of RMSE, it is among the lowest performing models (8.85), together with ‘BLSTM’ and ‘Twolayer BLSTM’. Deep models in combination with word2vec (i.e., ‘word2vec + Two-layer LSTM’ and ‘word2vec + Two-layer BLSTM’) and SVMs are comparable in terms of r and ρ, though not in terms of RMSE, where the former produce better results, with RMSE improving by half (4.79). doc2vec also produces competitive RMSE results (4.43), though correlation is much lower (ρ = 0.62 and r = 0.63). The two BLSTMs trained with word2vec embeddings are among the most competitive models in terms of correlation and outperform all the models, except the ones using pre-trained embeddings and SSWEs. Increasing the number of hidden layers and/or adding bi-directionality does not always improve performance, but it clearly helps in this case and performance improves compared to their uni-directional counterparts. Using pre-trained word embeddings improves the results further. More specifically, we found ‘word2vecpre-trained + Two-layer BLSTM’ to be the best configuration, increasing correlation to 0.79 ρ and 0.91 r, and reducing RMSE to 3.2. We note however that this is not an entirely 721 fair comparison as these are trained on a much larger corpus than our training set (which we use to train our models). Nevertheless, when we use our SSWEs models we are able to outperform ‘word2vecpre-trained + Two-layer BLSTM’, even though our embeddings are trained on fewer data points. More specifically, our best model (‘SSWE + Two-layer BLSTM’) improves correlation to ρ = 0.91 and r = 0.96, as well as RMSE to 2.4, giving a maximum increase of around 10% in correlation. Given the results of the pre-trained model, we believe that the performance of our best SSWE model will further improve should more training data be given to it.7 5.2 Discussion Our SSWE + LSTM approach having no prior knowledge of the grammar of the language or the domain of the text, is able to score the essays in a very human-like way, outperforming other stateof-the-art systems. Furthermore, while we tuned the models’ hyperparameters on a separate validation set, we did not perform any further preprocessing of the text other than simple tokenization. In the essay scoring literature, text length tends to be a strong predictor of the overall score. In order to investigate any possible effects of essay length, we also calculate the correlation between the gold scores and the length of the essays. We find that the correlations on the test set are relatively low (r = 0.3, ρ = 0.44), and therefore conclude that there are no such strong effects. As described above, we used Bayesian Optimization to find optimal hyperparameter configurations in fewer steps than in regular grid search. Using this approach, the optimization model showed some clear preferences for some parameters which were associated with better scoring models:8 the number of ‘noisy’ sequences E, the weighting factor α and the size of the LSTM layer DLSTM. The optimal α value was consistently set to 0.1, which shows that our SSWE approach was necessary to capture the usage of the words. Performance dropped considerably as α increased (less weight on SSWEs and more on the contextual aspect). When using α = 1, which 7Our approach outperforms all the other models in terms of Cohen’s κ too. 
8For the best scoring model the hyperparameters were as follows: D = 200, H = 100, η = 1e −7, n = 9, E = 200, α = 0.1, DLST M = 10, r = 0.5. is equivalent to using the basic C&W model, we found that performance was considerably lower (e.g., correlation dropped to ρ = 0.15). The number of ‘noisy’ sequences was set to 200, which was the highest possible setting we considered, although this might be related more to the size of the corpus (see Mikolov et al. (2013) for a similar discussion) rather than to our approach. Finally, the optimal value for DLSTM was 10 (the lowest value investigated), which again may be corpus-dependent. 6 Visualizing the black box In this section, inspired by recent advances in (de-) convolutional neural networks in computer vision (Simonyan et al., 2013) and text summarization (Denil et al., 2014), we introduce a novel method of generating interpretable visualizations of the network’s performance. In the present context, this is particularly important as one advantage of the manual methods discussed in § 2 is that we are able to know on what grounds the model made its decisions and which features are most discriminative. At the outset, our goal is to assess the ‘quality’ of our word vectors. By ‘quality’ we mean the level to which a word appearing in a particular context would prove to be problematic for the network’s prediction. In order to identify ‘high’ and ‘low’ quality vectors, we perform a single pass of an essay from left to right and let the LSTM make its score prediction. Normally, we would provide the gold scores and adjust the network weights based on the error gradients. Instead, we provide the network with a pseudo-score by taking the maximum score this specific essay can take9 and provide this as the ‘gold’ score. If the word vector is of ‘high’ quality (i.e., associated with higher scoring texts), then there is going to be little adjustment to the weights in order to predict the highest score possible. Conversely, providing the minimum possible score (here 0), we can assess how ‘bad’ our word vectors are. Vectors which require minimal adjustment to reach the lowest score are considered of ‘lower’ quality. Note that since we do a complete pass over the network (without doing any weight updates), the vector quality is going to be essay dependent. 9Note the in the Kaggle dataset essays from different essay sets have different maximum scores. Here we take as ˜ymax the essay set maximum rather than the global maximum. 722 . . . way to show that Saeng is a determined . ... . . . sometimes I do . Being patience is being ... . . . which leaves the reader satisfied ... . . . is in this picture the cyclist is riding a dry and area which could mean that it is very and the looks to be going down hill there looks to be a lot of turns . ... . . . The only reason im putting this in my own way is because know one is patient in my family . ... . . .Whether they are building hand-eye coordination , researching a country , or family and friends through @CAPS3 , @CAPS2 , @CAPS6 the internet is highly and I hope you feel the same way . Table 2: Several example visualizations created by our LSTM. The full text of the essay is shown in black and the ‘quality’ of the word vectors appears in color on a range from dark red (low quality) to dark green (high quality). Concretely, using the network function f(x) as computed by Eq. (12) – (17), we can approximate the loss induced by feeding the pseudo-scores by taking the magnitude of each error vector (18) – (19). 
Since lim∥w∥2→0 ˆy = y, this magnitude should tell us how much an embedding needs to change in order to achieve the gold score (here pseudo-score). In the case where we provide the minimum as a pseudo-score, a ∥w∥2 value closer to zero would indicate an incorrectly used word. For the results reported here, we combine the magnitudes produced from giving the maximum and minimum pseudo-scores into a single score, computed as L(˜ymax, f(x)) −L(˜ymin, f(x)), where: L(˜y, f(x)) ≈∥w∥2 (18) w = ∇L(x) ≜∂L ∂x (˜y,f(x)) (19) where ∥w∥2 is the vector Euclidean norm w = qPN i=1 w2 i ; L(·) is the mean squared error as in Eq. (8); and ˜y is the essay pseudo-score. We show some examples of this visualization procedure in Table 2. The model is capable of providing positive feedback. Correctly placed punctuation or long-distance dependencies (as in Sentence 6 are . . . researching) are particularly favoured by the model. Conversely, the model does not deal well with proper names, but is able to cope with POS mistakes (e.g., Being patience or the internet is highly and . . . ). However, as seen in Sentence 3 the model is not perfect and returns a false negative in the case of satisfied. One potential drawback of this approach is that the gradients are calculated only after the end of the essay. This means that if a word appears multiple times within an essay, sometimes correctly and sometimes incorrectly, the model would not be able to distinguish between them. Two possible solutions to this problem are to either provide the gold score at each timestep which results into a very computationally expensive endeavour, or to feed sentences or phrases of smaller size for which the scoring would be more consistent.10 7 Conclusion In this paper, we introduced a deep neural network model capable of representing both local contextual and usage information as encapsulated by essay scoring. This model yields score-specific word embeddings used later by a recurrent neural network in order to form essay representations. We have shown that this kind of architecture is able to surpass similar state-of-the-art systems, as well as systems based on manual feature engineering which have achieved results close to the upper bound in past work. We also introduced a novel way of exploring the basis of the network’s internal scoring criteria, and showed that such models are interpretable and can be further exploited to provide useful feedback to the author. Acknowledgments The first author is supported by the Onassis Foundation. We would like to thank the three anonymous reviewers for their valuable feedback. 10We note that the same visualization technique can be used to show the ‘goodness’ of phrases/sentences. Within the phrase setting, after feeding the last word of the phrase to the network, the LSTM layer will contain the phrase embedding. Then, we can assess the ‘goodness’ of this embedding by evaluating the error gradients after predicting the highest/lowest score. 723 References Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-Rater v.2.0. Journal of Technology, Learning, and Assessment, 4(3):1–30. Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proceedings of the COLING/ACL, volume 6. Ted Briscoe, Ben Medlock, and Øistein E. Andersen. 2010. Automated assessment of ESOL free text examinations. Technical Report UCAM-CL-TR790, University of Cambridge, Computer Laboratory, nov. 
Ciprian Chelba, Tom´aˇs Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling. In arXiv preprint. YY Chen, CL Liu, TH Chang, and CH Lee. 2010. An Unsupervised Automated Essay Scoring System. IEEE Intelligent Systems, pages 61–67. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. Proceedings of the Twenty-Fifth international conference on Machine Learning, pages 160–167, July. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Mar. Scott Crossley, Laura K Allen, Erica L Snow, and Danielle S McNamara. 2015. Pssst... textual features... there is more to automatic essay scoring than just you! In Proceedings of the Fifth International Conference on Learning Analytics And Knowledge, pages 203–207. ACM. Yann N. Dauphin, Harm de Vries, and Yoshua Bengio. 2015. Equilibrated adaptive learning rates for nonconvex optimization. Feb. Misha Denil, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom, and Nando de Freitas. 2014. Modelling, visualising and summarising documents with a single convolutional neural network. Jun. Semire Dikli. 2006. An overview of automated scoring of essays. Journal of Technology, Learning, and Assessment, 5(1). S. Elliot. 2003. IntellimetricTM: From here to validity. In M. D. Shermis and J. Burnstein, editors, Automated Essay Scoring: A Cross-Disciplinary Perspective, pages 71–86. Lawrence Erlbaum Associates. Noura Farra, Swapna Somasundaran, and Jill Burstein. 2015. Scoring persuasive essays using opinions and their targets. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 64–74. Alex Graves. 2012. Supervised Sequence Labelling with Recurrent Neural Networks. Springer Berlin Heidelberg. Michael U. Gutmann and Aapo Hyv¨arinen. 2012. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. J. Mach. Learn. Res., 13:307–361, February. Karl Moritz Hermann, Tom Koisk, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Jun. S Hochreiter and J Schmidhuber. 1997. Long shortterm memory. Neural computation, 9(8):1735– 1780. Beata Beigman Klebanov and Michael Flor. 2013. Word association profiles and their use for automated scoring of essays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1148–1158. Thomas K. Landauer, Darrell Laham, and Peter W. Foltz. 2003. Automated scoring and annotation of essays with the Intelligent Essay Assessor. In M.D. Shermis and J.C. Burstein, editors, Automated essay scoring: A cross-disciplinary perspective, pages 87– 112. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. May. Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. 2009. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. Proceedings of the 26th Annual International Conference on Machine Learning ICML 09. Deryle Lonsdale and D. Strong-Krause. 2003. Automated rating of ESL essays. In Proceedings of the HLT-NAACL 2003 Workshop: Building Educational Applications Using Natural Language Processing. 
Danielle S McNamara, Scott A Crossley, Rod D Roscoe, Laura K Allen, and Jianmin Dai. 2015. A hierarchical classification approach to automated essay scoring. Assessing Writing, 23:35–59. Tom´aˇs Mikolov, Stefan Kombrink, Anoop Deoras, Luk´aˇs Burget, and Jan ˇCernock´y. 2011. RNNLM-Recurrent neural network language modeling toolkit. In ASRU 2011 Demo Session. Tomas Mikolov, I Sutskever, K Chen, G S Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Ellis B. Page. 1967. Grading essays by computer: progress report. In Proceedings of the Invitational Conference on Testing Problems, pages 87–100. 724 Ellis B. Page. 1968. The use of the computer in analyzing student essays. International Review of Education, 14(2):210–225, June. E.B. Page. 2003. Project essay grade: PEG. In M.D. Shermis and J.C. Burstein, editors, Automated essay scoring: A cross-disciplinary perspective, pages 43– 54. L.M. Rudner and Tahung Liang. 2002. Automated essay scoring using Bayes’ theorem. The Journal of Technology, Learning and Assessment, 1(2):3–21. Keisuke Sakaguchi, Michael Heilman, and Nitin Madnani. 2015. Effective feature integration for automated short answer scoring. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications. M Shermis and B Hammer. 2012. Contrasting stateof-the-art automated scoring of essays: analysis. Technical report, The University of Akron and Kaggle. Mark D Shermis. 2015. Contrasting state-of-the-art in the machine scoring of short-form constructed responses. Educational Assessment, 20(1):46–65. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. 12. D.D.K. Sleator and D. Templerley. 1995. Parsing English with a link grammar. Proceedings of the 3rd International Workshop on Parsing Technologies, ACL. Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. 2012. Practical bayesian optimization of machine learning algorithms. Jun. Swapna Somasundaran, Jill Burstein, and Martin Chodorow. 2014. Lexical chaining for measuring discourse coherence quality in test-taker essays. In COLING, pages 950–961. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. Sep. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. Feb. Duyu Tang. 2015. Sentiment-specific representation learning for document-level sentiment analysis. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining - WSDM '15. Association for Computing Machinery (ACM). D. M. Williamson. 2009. A framework for implementing automated scoring. Technical report, Educational Testing Service. Helen Yannakoudakis and Ronan Cummins. 2015. Evaluating the performance of automated text scoring systems. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics (ACL). Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 180–189. 725
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 726–736, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Improved Semantic Parsers For If-Then Statements I. Beltagy The University of Texas at Austin [email protected] Chris Quirk Microsoft Research [email protected] Abstract Digital personal assistants are becoming both more common and more useful. The major NLP challenge for personal assistants is machine understanding: translating natural language user commands into an executable representation. This paper focuses on understanding rules written as If-Then statements, though the techniques should be portable to other semantic parsing tasks. We view understanding as structure prediction and show improved models using both conventional techniques and neural network models. We also discuss various ways to improve generalization and reduce overfitting: synthetic training data from paraphrase, grammar combinations, feature selection and ensembles of multiple systems. An ensemble of these techniques achieves a new state of the art result with 8% accuracy improvement. 1 Introduction The ability to instruct computers using natural language clearly allows novice users to better use modern information technology. Work in semantic parsing has explored mapping natural language to some formal domain-specific programming languages such as database queries (Woods, 1977; Zelle and Mooney, 1996; Berant et al., 2013; Andreas et al., 2016; Yin et al., 2016), commands to robots (Kate et al., 2005), operating systems (Branavan et al., 2009), and spreadsheets (Gulwani and Marron, 2014). This paper explores the use of neural network models (NN) and conventional models for semantic parsing. Recently approaches using neural networks have shown great improvements in a number of areas such as parsing (Vinyals et al., 2015), machine translation (Devlin et al., 2014), and image captioning (Karpathy and Fei-Fei, 2015). We are among the first to apply neural network methods to semantic parsing tasks (Grefenstette et al., 2014; Dong and Lapata, 2016). There are several benchmark datasets for semantic parsing, the most well known of which is Geoquery (Zelle and Mooney, 1996). We target an If-Then dataset (Quirk et al., 2015) for several reasons. First, it is both directly applicable to the end-user task of training personal digital assistants. Second, the training data, drawn from the site http://ifttt.com, is comparatively quite large, containing nearly 100,000 recipe-description pairs. That said, it is several orders of magnitude smaller than the data for other tasks where neural networks have been successful. Machine translation datasets, for instance, may contain billions of tokens. NN methods appear “data-hungry”. They require larger datasets to outperform sparse linear approaches with careful feature engineering, as evidenced in work on syntactic parsing (Vinyals et al., 2015). This makes it interesting to compare NN models with conventional models on this dataset. As in most prior semantic parsing attempts, we model natural language understanding as a structure prediction problem. Each modeling decision predicts some small component of the target structure, conditioned on the whole input and all prior decisions. Because this is a real-world task, the vocabulary is large and varied, with many words appearing only rarely. Overfitting is a clear danger. We explore several methods to improve generalization. 
A classic method is to apply feature selection. Synthetic data generated by paraphrasing helps augment the data available. Adjusting the conditional structure of our model also makes sense, as does creating ensembles of the best performing approaches. An ensemble of the resulting systems achieves a new state-of-the-art result, with an absolute improvement of 8% in accuracy. We compare the performance of a neural network model with logistic regression, explore in detail the contribution of each, and examine why the logistic regression performs better than the neural network.

2 Related Work

2.1 Semantic Parsing

Semantic parsing is the task of translating natural language to a meaning representation language that the machine can execute. Various semantic parsing tasks have been proposed before, including querying a database (Zelle and Mooney, 1996), following navigation instructions (Chen, 2012), translating to Abstract Meaning Representation (AMR) (Artzi et al., 2015), as well as the If-Then task we explore. Meaning representation languages vary with the task. In database queries, the meaning representation language is either the native query language (e.g. SQL or Prolog), or some alternative that can be deterministically transformed into the native query language. To follow navigation instructions, the meaning representation language is comprised of sequences of valid actions: turn left, turn right, move forward, etc. For parsing If-Then rules, the meaning representation is an abstract syntax tree (AST) in a very simple language. Each root node expands into a "trigger" and "action" pair. These nodes in turn expand into a set of supported triggers and actions. We model these trees as an (almost) context free grammar[1] that generates valid If-Then tasks.

A number of semantic parsing approaches have been proposed, but most fit into the following broad divisions. First, approaches driven by Combinatory Categorial Grammar (CCG) have proven successful at several semantic parsing tasks. This approach is attractive in that it simultaneously provides syntactic and semantic parses of a natural language utterance. Syntactic structure helps constrain and guide semantic interpretation. CCG relies heavily on a lexicon that specifies both the syntactic category and formal semantics of each lexical item in the language. In many instantiations, the lexicon is learned from the training data (Zettlemoyer and Collins, 2005) and grounds directly in the meaning representation. Another approach is to view the semantic parsing task as a machine translation task, where the source language is natural language commands and the target language is the meaning representation. Several approaches have applied standard machine translation techniques to semantic parsing (Wong and Mooney, 2006; Andreas et al., 2013; Ratnaparkhi, 1999) with successful results. More recently, neural network approaches have been developed for semantic parsing, and especially for querying a database. A neural network is trained to translate the query and the database into some continuous representation and then use it to answer the query (Andreas et al., 2016; Yin et al., 2016).

[1] Information at the leaves of the action may use parameters drawn from the trigger. For instance, consider a rule that says "text me the daily weather report." The trigger is a new weather report, and the action is to send an SMS. The contents of that SMS are generated by the trigger, which is no longer context free.
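To make the shape of these If-Then ASTs concrete, here is a minimal sketch. The class and field names are illustrative choices of ours, not taken from the paper or from the IFTTT service; arguments are carried along but ignored when rendering the evaluation target.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Call:
    channel: str            # e.g. "Instagram"
    function: str           # e.g. "AnyNewPhotoByYou"
    args: Dict[str, str] = field(default_factory=dict)  # not evaluated

@dataclass
class Recipe:
    trigger: Call           # root expands into a (trigger, action) pair
    action: Call

    def signature(self) -> str:
        """Render the argument-free form used as the prediction target."""
        return (f"IF {self.trigger.channel}.{self.trigger.function} "
                f"THEN {self.action.channel}.{self.action.function}")

# Example: "Autosave your Instagram photos to Dropbox"
recipe = Recipe(Call("Instagram", "AnyNewPhotoByYou"),
                Call("Dropbox", "AddFileFromURL"))
print(recipe.signature())
# IF Instagram.AnyNewPhotoByYou THEN Dropbox.AddFileFromURL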
2.2 If-Then dataset

We use a semantic parsing dataset collected from http://ifttt.com, first introduced in Quirk et al. (2015). This website publishes a large set of recipes in the form of If-Then rules. Each recipe was authored by a website user to automate simple tasks. For instance, a recipe could send you a message every time you are tagged on a picture on Facebook. From a natural language standpoint, the most interesting part of this data is that alongside each recipe, there is a short natural language description intended to name or advertise the task. This provides a naturalistic albeit often noisy source of parallel data for training semantic parsing systems. Some of these descriptions faithfully represent the program. Others are underspecified or suggestive, with many details of the recipe not uniquely specified or omitted altogether. The task is to predict the correct If-Then code given a natural language description. As for the code, If-Then statements follow the format

If TriggerChannel.TriggerFunction(args)
Then ActionChannel.ActionFunction(args)

Every If-Then statement has exactly one trigger and one action. Each trigger and action consists of both a channel and a function. The channel represents a connection to a service, website, or device (e.g., Facebook, Android, or ESPN) and provides a set of functions relevant to that channel. Finally, each of these functions may take a number of arguments: to receive a trigger when it becomes sunny, we need to specify the location to watch. The resulting dataset after cleaning and separation contains 77,495 training recipes, 5,171 development recipes and 4,294 testing recipes.

2.3 Semantic parsing for If-Then rules

Both CCG and MT-inspired approaches assume a fairly strong correspondence between the words in the natural language request and the concepts in the meaning representation. That is, most words in the description should correspond to some concept in the code, and most concepts in the code should correspond to some word in the description. However, prior work on this dataset (Quirk et al., 2015) found that this strong correspondence is often missing. The descriptions may mention only the most crucial or interesting concepts; the remainder of the meaning representation must be inferred from context. The best performing methods focused primarily on generating well-formed meaning representations, conditioning their decisions on the source language. Quirk et al. (2015) proposed two models that rely on a grammar to generate all valid ASTs. The first model learns a simple classifier for each production in the grammar, treating the sentence as a bag of features. No alignment between the language and meaning representation is assumed. The second method attempts to learn a correspondence between the language and the code, jointly learning to select the correct productions in the meaning representation grammar. Although the latter approach is more appealing from a modeling standpoint, empirically it doesn't perform substantially better than the alignment-free model. Furthermore, the alignment-free model is much simpler to implement and optimize. Therefore, we build upon the alignment-free approach.

2.4 Neural Networks

Neural network approaches have recently made great strides in several natural language processing tasks, including machine translation and dependency parsing. Partially these gains are due to better generalization ability.
Until recently, the NLP community leaned heavily on feature-rich approaches that allow models to learn complex relationships from data. However, important features, such as indicator features for words and phrases, were often very sparse. Furthermore, the best systems often relied on manually-induced feature combinations (Bohnet, 2010).

[Figure 1: Derivation tree of the If-Then statement of the recipe Autosave your Instagram photos to Dropbox (IF -> TRIGGER ACTION; TRIGGER -> Instagram, AnyNewPhotoByYou; ACTION -> Dropbox, AddFileFromURL). Arguments of the functions AnyNewPhotoByYou and AddFileFromURL are ignored.]

Multi-layer neural networks have several advantages. Words (or, more generally, features) are first embedded into a continuous space where similar features land in nearby locations; this helps lead to lexical generalization. The additional hidden layers can model feature interactions in complex ways, obviating the need for manual feature template induction. Feed-forward neural networks with relatively simple structure have shown great gains in both dependency parsing (Chen and Manning, 2014) and machine translation (Devlin et al., 2014) without the need for complex feature templates and large models. Our NN models here are inspired by these effective approaches.

3 Approach

We next describe the details of how If-Then recipes are constructed given natural language descriptions. As in prior work, we treat semantic parsing as a structure prediction task. First we describe the structure and features of the model, then expand on the details of inference.

3.1 Grammar

Along the lines of Quirk et al. (2015), we build a context-free grammar baseline. This grammar generates only well-formed meaning representations. In the case of this dataset, meaning representations always consist of a root production with two children: a trigger and an action. Both trigger and action first generate a channel, then a function matching that channel. Optionally we may also generate the arguments of these functions; we do not evaluate these selections as they are often idiosyncratic and specific to the user. For example, the recipe Autosave your Instagram photos to Dropbox has the following meaning representation:

IF Instagram.AnyNewPhotoByYou
THEN Dropbox.AddFileFromURL(FileURL={SourceUrl}, FileName={Caption}, DropboxFolderPath=IFTTT/Instagram)

If we ignore the function arguments, the resulting meaning representation is:

IF Instagram.AnyNewPhotoByYou THEN Dropbox.AddFileFromURL

This example also shows that most of the function arguments are not crucial for the representation of the If-Then statement.[2] The grammar we use has productions corresponding to every channel and every function. Figure 1 shows an example derivation tree D. This grammar consists of 892 productions: 128 trigger channels, 487 trigger functions, 99 action channels and 178 action functions.[3]

3.2 Model

Our goal is to learn a model of derivation trees D given natural sentences S. To predict the derivation for a sentence, we seek the derivation D with maximum probability given the sentence, P(D|S). For the purposes of modeling, we prefer to work with sequences rather than trees. Given a derivation tree D, we transform it into a sequence of productions R(D) = r1, ..., rn by a top-down, left-to-right tree traversal: r1 is the top-most production, and rn is the bottom right production. The sentence S is represented as a set of features f(S).
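As a concrete illustration of this tree-to-sequence flattening, here is a small sketch: a derivation node carries its production label and children, and R(D) is the pre-order (top-down, left-to-right) traversal. The production label strings are illustrative choices of ours, not the paper's internal representation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DerivationNode:
    production: str
    children: List["DerivationNode"] = field(default_factory=list)

def flatten(node: DerivationNode) -> List[str]:
    """R(D): the node's production, then its children left to right."""
    seq = [node.production]
    for child in node.children:
        seq.extend(flatten(child))
    return seq

# Derivation tree for "Autosave your Instagram photos to Dropbox" under the
# primary grammar, where channels expand into functions.
tree = DerivationNode("ROOT -> TRIGGER ACTION", [
    DerivationNode("TRIGGER -> Instagram",
                   [DerivationNode("Instagram -> AnyNewPhotoByYou")]),
    DerivationNode("ACTION -> Dropbox",
                   [DerivationNode("Dropbox -> AddFileFromURL")]),
])
print(flatten(tree))
# ['ROOT -> TRIGGER ACTION', 'TRIGGER -> Instagram',
#  'Instagram -> AnyNewPhotoByYou', 'ACTION -> Dropbox',
#  'Dropbox -> AddFileFromURL']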
The derivation score P(D|S) is a function of the productions of D and those features f(S):

P(D|S) = \prod_{r_i \in R(D)} P(r_i \mid r_1, \ldots, r_{i-1}, f(S))    (1)

The score of a derivation tree given the sentence is the product of the probabilities of its productions. The probability of selecting production ri given the sentence S depends on the features of the sentence as well as the previous productions r1, ..., ri−1; namely, all those productions that are above and to the left of the current production. Conditioning on previous productions helps predict the next one because it captures the conditional dependencies between the productions of the derivation tree, an improvement over prior work (Quirk et al., 2015). In particular, we can model which combinations of triggers and actions are more compatible, both function and channel.

[2] Arguments are still important for a few If-Then recipes. For instance, in If there is snow tomorrow send a notification, "snow" is an argument to the function Tomorrow'sForecastCallsFor. We are not handling such cases in this work.
[3] For this task, it is possible to model the programs as a 4-tuple, but using the grammar approach allows us to port the same technique to other semantic parsing tasks.

[Figure 2: Architecture of the feed-forward neural networks used in this paper. The input layer consists of the sentence S and the prior rules ri−1, ri−2, ri−3; one or more hidden layers feed an output layer that predicts ri. When predicting rule ri, the prior rules and the whole sentence are used as input. Separate parameters are learned for each position i.]

3.3 Training

To learn the derivation score P(D|S), we need to learn the probability of productions P(ri|r1, ..., ri−1, f(S)). We learn this probability using a multiclass classifier where the output classes are the possible productions in the grammar. The classifier is trained to predict the next production given previous productions and the sentence features. Each sentence S is represented with a sparse feature vector f(S). We used a simple set of features: word unigrams and bigrams, character trigrams, and Brown clusters (Liang, 2005). Each sentence is represented as a large sparse k-hot vector, where k is the number of features representing S, |f(S)|. We use a simple one-hot representation of prior rules. For training, we explored two approaches: a standard logistic regression classifier, and a feed-forward neural network classifier.[4]

As for network structure, we evaluated models with either one or two 200-dimensional hidden layers (with sigmoid activation function) followed by a softmax output layer to produce a probability for each production. We tried more than two hidden layers and larger hidden layer sizes, but the results were similar or worse, likely because training becomes more difficult. Figure 2 shows the architecture of the network we use. For training, we used a variant of stochastic gradient descent called RMSprop (Dauphin et al., 2015) that adjusts the learning rate for each parameter adaptively, along with a global learning rate of 10^{-3}. The minibatch size was 100, with dropout regularization for hidden layers at 0.5 along with an L2 regularizer with weight 0.005. Each of these parameters was tuned on the validation set, though we found learning to be robust to minor variations in these parameters. All of the neural networks were implemented with Theanets (Johnson, 2015). Note that the history features r1, ..., ri−1 in classifier training are always correct. The model is akin to a MEMM, rather than a CRF.
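To make this setup concrete, here is a minimal, self-contained sketch of the locally normalized production classifier in its logistic-regression form: hashed sparse sentence features, one-hot prior-production features, a softmax over all productions, and the derivation score of Equation (1) computed with gold histories (the MEMM-style simplification discussed just below). The feature hashing, the toy production list and the dimensions are illustrative choices of ours, not the paper's exact configuration, which uses scikit-learn and Theanets.

import numpy as np

PRODUCTIONS = ["ROOT -> TRIGGER ACTION",
               "TRIGGER -> Instagram",
               "Instagram -> AnyNewPhotoByYou",
               "ACTION -> Dropbox",
               "Dropbox -> AddFileFromURL"]
PROD_INDEX = {p: i for i, p in enumerate(PRODUCTIONS)}
NUM_FEATS = 2 ** 16  # hashed feature space (illustrative simplification)

def sentence_features(sentence: str) -> np.ndarray:
    """k-hot vector of word unigrams, bigrams and character trigrams."""
    x = np.zeros(NUM_FEATS)
    words = sentence.lower().split()
    feats = (words
             + [" ".join(b) for b in zip(words, words[1:])]
             + [sentence[i:i + 3] for i in range(len(sentence) - 2)])
    for f in feats:
        x[hash(f) % NUM_FEATS] = 1.0
    return x

def history_features(history) -> np.ndarray:
    """One-hot encoding of the previous productions."""
    h = np.zeros(len(PRODUCTIONS))
    for r in history:
        h[PROD_INDEX[r]] = 1.0
    return h

def next_production_probs(W, sentence, history) -> np.ndarray:
    """Softmax P(r_i | r_1..r_{i-1}, f(S)) over every production."""
    x = np.concatenate([sentence_features(sentence), history_features(history)])
    scores = W @ x
    scores -= scores.max()           # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def derivation_log_prob(W, sentence, productions) -> float:
    """Equation (1) in log space, with gold (always-correct) histories."""
    logp = 0.0
    for i, r in enumerate(productions):
        probs = next_production_probs(W, sentence, productions[:i])
        logp += np.log(probs[PROD_INDEX[r]])
    return logp

# Untrained weights, just to show the shapes involved.
W = np.zeros((len(PRODUCTIONS), NUM_FEATS + len(PRODUCTIONS)))
print(derivation_log_prob(W, "Autosave your Instagram photos to Dropbox",
                          PRODUCTIONS))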
We make this simplifying assumption for tractability, like many neural network approaches (Devlin et al., 2014). 3.4 Inference When, at test time, we are given a new sentence, we would like to infer its most probable derivation tree D. Classifiers trained as in the prior section give probability distributions over productions given the sentence and all prior productions P(ri|r1, . . . , ri−1, f(S)). Were the distribution to be context free, we could rely on algorithms similar to Earley parsing (Earley, 1970) to find the max derivation. However, the dependency on prior productions breaks the context free assumption. Therefore, we resort to approximate inference, namely beam search. Each partial hypothesis is grouped into a beam based on the number of productions it contains; we use a beam width of 8, and search for the highest scoring hypothesis. 4 Improving generalization The data set we use for training and testing is primarily English but contains a broad vocabulary as 4We tried the sequence-to-sequence model with LSTMs (Sutskever et al., 2014) to map word sequence to the derivation tree productions, but the results were always lower than the feed forward network. This is probably because of the lack of enough training data. well as many sentences from other languages such as Chinese, Arabic, and Russian. Thus, a seemingly large dataset of nearly eighty thousand examples is likely to suffer from overfitting. In this section, we discuss a few attempts to improve generalization in the sparse data setting. 4.1 Synthetic data using paraphrases Arguably the best, though most expensive, way to reduce overfitting is to collect more training data. In our case, the training data available is limited and difficult to create. We propose to augment the training data in an automatic though potentially noisy way by generating synthetic training pairs. The main idea is that two semantically equivalent sentences should have the same meaning representation. Given an existing training pair, replacing the pair’s linguistic description with a paraphrase leads to a new synthetic training pair. For example, a recipe like Autosave your Instagram photos to Dropbox can be paraphrased to Autosave your Instagram pictures to Dropbox while retaining the meaning representation: IF Instagram . AnyNewPhotoByYou THEN Dropbox . AddFileFromURL . We first explore paraphrases using WordNet synonyms. Every word in the sentence can be replaced by one of its synonyms that is picked randomly (a word is a synonym of itself). For words with multiple senses, we group all synonyms of all senses, then retain only those synonyms already in the vocabulary of the training data. This has two advantages. First, we do not increase the vocabulary size and therefore avoid overfitting. Second, this acts as a simple form of word sense disambiguation. This adds around 50,000 additional training examples. Next, we consider augmenting the data using the Paraphrase Database (Ganitkevitch et al., 2013). Each original description is converted into a lattice. The original word at each position is left in place with a constant score. For each word or phrase in the description found PPDB, we add one arc for each paraphrase, parameterized by the PPDB score of that phrase. The resulting lattice represents many possible paraphrases of the input. We select at most 10 diverse paths through this lattice using the method of Gimpel et al. (2013).5 This adds around 470,000 training examples. 5We use a trigram language model, and a weight of 4. 
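A minimal sketch of the synonym-based augmentation in Section 4.1. To stay self-contained it takes a precomputed synonym map as input; in the paper this map comes from WordNet, pooling the synonyms of all senses, whereas the toy map below is invented. Replacements are restricted to words already in the training vocabulary so the feature space does not grow, and the meaning representation is copied unchanged.

import random

def paraphrase(sentence, synonyms, train_vocab, rng=random.Random(0)):
    """One synthetic paraphrase: each word is replaced by a randomly chosen
    in-vocabulary synonym (a word always counts as its own synonym)."""
    out = []
    for w in sentence.split():
        candidates = [w] + [s for s in synonyms.get(w, []) if s in train_vocab]
        out.append(rng.choice(candidates))
    return " ".join(out)

def augment(pairs, synonyms, train_vocab):
    """Each (description, recipe) pair yields one extra synthetic pair with
    the same meaning representation."""
    return [(paraphrase(d, synonyms, train_vocab), r) for d, r in pairs]

toy_synonyms = {"photos": ["pictures", "pics"], "autosave": ["save"]}
vocab = {"autosave", "your", "instagram", "photos", "pictures", "to", "dropbox"}
pairs = [("autosave your instagram photos to dropbox",
          "IF Instagram.AnyNewPhotoByYou THEN Dropbox.AddFileFromURL")]
print(augment(pairs, toy_synonyms, vocab))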
[Figure 3: Derivation tree of the IFTTT statement of the recipe Autosave your Instagram photos to Dropbox using the second grammar (IF -> TRIGGER ACTION; TRIGGER -> Instagram.AnyNewPhotoByYou; ACTION -> Dropbox.AddFileFromURL).]

4.2 Alternative grammar formulation

We rely on a grammar to generate all valid meaning representations and learn models over the productions of this grammar. Different factorizations of the grammar lead to different model distributions. Our primary grammar is described in Section 3.1. A second, alternate grammar formulation has fewer levels but more productions: it combines the channel and function into a single production, in both the trigger and the action. Figure 3 shows an example derivation tree using this grammar. The size of this grammar is 780 productions (552 triggers + 228 actions). An advantage of this grammar is that it cannot assign probability mass to invalid ASTs, where the function is not applicable to the channel. On the other hand, this grammar likely does not generalize as well as the first grammar. The first grammar effectively has much more data about each channel, which likely improves accuracy. Function predictions can condition on hopefully accurate channel predictions. It can also benefit from the fact that some function names are shared among channels. From that perspective, the second grammar has fewer training instances for each outcome.

4.3 Feature selection

The training set contains approximately 77K training examples, yet the number of distinct feature types (word unigrams and bigrams, character trigrams, Brown clusters) is approximately 230K. Only 80K features occur in the training set more than once. This ratio suggests overfitting may be a major issue. Feature selection can likely mitigate it. We used only simple count cutoffs, including only features that occur in the training set more than once and more than twice. Including features that occur more than once led to improvements in practice.

4.4 Ensemble

Finally, we explore improving generalization by building ensembles of multiple systems. Even if systems overfit, they likely overfit in different ways. When systems agree, they are likely to agree on the correct answer. Combining their results will suffer less from overfitting. We use simple majority voting as an ensemble strategy, resolving ties in an arbitrary but deterministic way.

5 Evaluation

We evaluate the performance of the systems by providing the model with descriptions unseen during training. Free parameters of the models were tuned using the development set. The separation of data into training, development, and test follows Quirk et al. (2015). Two evaluation metrics are used: accuracy on just channel selection and accuracy of both channel and function. Two major families of approaches are considered: a baseline logistic regression classifier from scikit-learn (Pedregosa et al., 2011), as well as a feed-forward neural network. We explore a number of variations, including feature selection and grammar formulation.

5.1 Comparison systems

Our default system was described in Section 3, not including improvements from Section 4 unless otherwise noted. The grammar uses the primary formulation from Section 3.1. Neural network models use a single hidden layer by default; we also explore two hidden layers. We evaluate two approaches for generating synthetic data. The first approach, leaning primarily on WordNet to generate up to one paraphrase for each instance, is labeled WN.
The second approach, using the Paraphrase Database to generate up to ten paraphrases, is labeled PPDB. The Alternate grammar line uses the Section 4.2 grammar, and otherwise default configurations (no synthetic data, single hidden layer for NN). Feature selection again uses the default configuration, but uses only those features that occurred more than once in the training data. Finally, we explore ensembles of all approaches. First, we combine all variations within the same model family; next, we bring all systems together. To evaluate the impact of individual systems, we also present results with specific systems removed.

System                                     Channel accuracy (NN / LR)   Full tree accuracy (NN / LR)
Quirk et al. (2015) w/o alignment          46.30                        33.00
Quirk et al. (2015) with alignment         47.40                        34.50
Default configurations                     52.93 / 53.73                39.66 / 41.87
Two hidden layers                          46.81 / --                   32.77 / --
No hidden layers                           50.05 / --                   38.47 / --
Synthetic data (WN)                        52.45 / 53.68                38.64 / 41.55
Synthetic data (PPDB)                      51.86 / 52.96                38.86 / 40.63
Alternate grammar                          50.09 / 52.42                39.10 / 41.15
Feature selection                          52.91 / 53.31                39.29 / 41.34
Ensemble of systems above                  53.98 / 53.73                41.06 / 41.85
Ensemble NN + LR                           54.31                        42.55
Ensemble NN + LR (w/o alternate grammar)   54.38                        41.90
Ensemble NN + LR (w/o synthetic data)      53.98                        42.41

Table 1: Accuracy of the Neural Network (NN) and Logistic Regression (LR) implementations of our system with various configurations. Channel-only and full tree (channel+function) accuracies are listed.

5.2 Results

Table 1 shows the accuracy of each evaluated system, and Table 2 explores system performance on important subsets of the data. The first columns present accuracy of just the channel, and the last columns present the channel and the function together (the full derivation). We achieve new state-of-the-art results, showing a 7% absolute improvement on the channel-only accuracy and 8% absolute improvement on the full derivation tree in the most difficult condition.

5.3 Discussion

Partly these improved results are driven by better features. Adding more robust representations of the input (e.g. Brown clusters) and conditioning on prior structure of the tree leads to more consistent and coherent trees. One key observation is that the logistic regression classifier consistently outperforms the neural network, though by a small margin. We suspect two main causes: optimization difficulties and training size. To compare the optimization algorithms, Table 1 shows the result of a neural network with no hidden layers, which is effectively identical to a logistic regression model. Stochastic gradient descent used to train the neural network did not perform as well as the LIBLINEAR (Fan et al., 2008) solver used to train the logistic regression, because the loss function was not optimized as well. Optimization problems are even more likely with hidden layers, since the objective is no longer convex. Second, the training data is small by neural network standards. Prior attempts to use neural networks for parsing required larger amounts of training data to exceed the state of the art. Non-linear models are able to capture regularities that linear models cannot, but may require more training data to do so. Table 1 shows that a network with a single hidden layer outperforms one with two hidden layers. The additional hidden layer seems to make learning harder (even with layer-wise pretraining). We also ran an additional experiment, limiting both NN and LR to use word unigram features, and varying the vocabulary size by frequency thresholding; the results are in Table 3.
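As an aside on the Ensemble rows in Table 1: these combine systems by the simple majority voting described in Section 4.4. A minimal sketch follows; the lexicographic tie-break is one concrete choice of ours, since the paper only specifies that ties are resolved in an arbitrary but deterministic way.

from collections import Counter

def majority_vote(predictions):
    """predictions: one predicted derivation per system (e.g. its
    'IF ... THEN ...' string) for a single input sentence."""
    counts = Counter(predictions)
    best = max(counts.values())
    tied = sorted(p for p, c in counts.items() if c == best)
    return tied[0]   # deterministic tie-break

systems = ["IF Instagram.AnyNewPhotoByYou THEN Dropbox.AddFileFromURL",
           "IF Instagram.AnyNewPhotoByYou THEN Dropbox.AddFileFromURL",
           "IF Facebook.NewPhotoByYou THEN Dropbox.AddFileFromURL"]
print(majority_vote(systems))
# IF Instagram.AnyNewPhotoByYou THEN Dropbox.AddFileFromURL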
LR models were more effective when all features were present, likely due to their convex objective and simple regularization. NN models, on the other hand, actually outperform LR models when limited to more common vocabulary items. Given more data, NN could likely find representations that outperformed manual feature engineering. Although we only considered feed-forward neural networks, results on recurrent architectures (Dong and Lapata, 2016) are in accordance with our findings. Their LSTM-based approach does not achieve great gains on this data set because "user curated descriptions are often of low quality, and thus align very loosely to their corresponding ASTs". Even though this training set is larger than other semantic parsing datasets, the vocabulary, sentence structures, and even languages here are much more diverse, which makes it difficult for the NN to learn useful representations. Dong and Lapata (2016) tried to reduce the impact of this problem by evaluating only on the English subset of the data.

                                                 Channel   Full tree
All: 4,294 recipes
  posclass                                       47.4      34.5
  D&L                                            --        --
  NN                                             52.9      39.7
  LR                                             53.7      41.9
  Ensemble                                       54.3      42.6
  oracleturk                                     48.8      37.8
Omit non-English: 3,744 recipes
  posclass                                       50.0      36.9
  D&L                                            54.3      39.2
  NN                                             55.1      41.2
  LR                                             56.0      44.3
  Ensemble                                       56.8      44.5
  oracleturk                                     56.0      43.5
Omit non-English, unintelligible: 2,433 recipes
  posclass                                       67.2      50.4
  D&L                                            68.8      50.5
  NN                                             71.3      53.7
  LR                                             71.9      56.6
  Ensemble                                       72.7      57.1
  oracleturk                                     86.2      59.4
≥3 agree with gold: 760 recipes
  posclass                                       81.4      71.0
  D&L                                            87.8      75.2
  NN                                             88.0      74.3
  LR                                             88.8      82.5
  Ensemble                                       89.1      82.2
  oracleturk                                     100.0     100.0

Table 2: System comparisons on various subsets of the data. Following Quirk et al. (2015), we also evaluate on illustrative subsets. "posclass" represents the best system from prior work. D&L is the best-performing system from Dong and Lapata (2016). NN and LR are the single best neural network and logistic regression models, and Ensemble is the combination of all systems. "oracleturk" represents cases where at least one turker agreed with the gold standard.

Interestingly, our carefully built feed-forward networks outperform their approach in almost every subset. Although the neural network with one hidden layer does not outperform logistic regression in a feature-rich setting, it makes substantially different predictions. An ensemble of their outputs achieves better accuracy than either system individually. Our techniques for improving generalization do not improve individual systems. Yet when all techniques are combined in an ensemble, the resulting predictions are better. Furthermore, an ensemble without the synthetic data or without the alternate grammar has lower accuracy: each technique contributes to the final result.

System       Full tree accuracy (NN / LR)
All words    35.79 / 37.03
Count ≥2     37.01 / 36.91
Count ≥3     37.07 / 36.59

Table 3: Accuracy of NN and LR limited to word unigram features, with three vocabulary sizes: all words, words occurring at least twice in the training data (13,971 words), and those occurring at least three times in the training data (8,974 words).

5.4 Comparison of logistic regression and neural network approaches

We performed a detailed exploration of the cases where either the LR model was correct and the NN model was wrong, or vice versa. Table 4 breaks these errors into a number of cases:

• Swapped trigger and action. Here the system misinterpreted a rule, swapping the trigger for the action.
An example NN swap was "Backup Pinboard entries to diigo"; an example LR swap was "Like a photo on tumblr and upload it to your flickr photostream."

• Duplicated. In this case, the system used the same channel for both trigger and action, despite clear evidence in the language. For instance, the LR model incorrectly used Facebook as both the trigger and the action channel in this recipe: "New photo on Facebook addec to my Pryv". The NN model correctly identified Pryv as the target channel, despite the typo in the recipe.

• Missed word cue. In many cases there was a clear "cue word" in the language that should have forced a correct channel, but the model picked the wrong one. For instance, in "tweet # stared youtube video", the trigger should be starred YouTube videos, but the NN model incorrectly selected feeds.

• Missed multi-word cue. Sometimes the cue was a multi-word phrase, such as "One Drive". The NN model tended to miss these cues.

• Missed inference. In certain cases the cue was more of a loose inference. Words such as "payment" and "refund" should tend to refer to triggers from the Square payment provider; the NN seemed to struggle on these cases.

• Related channel. Often the true channel is very difficult to pick: should the system use iOS location or Android location? NN models seemed to do better on these cases, perhaps picking up on some latent cues in the data that were not immediately evident to the authors.

Error type                   NN errors   LR errors
Swapped trigger and action   4           4
Duplicated                   3           4
Missed word cue              8           8
Missed multi-word cue        2           0
Missed inference             8           0
Related channel              5           8
Grand Total                  30          24

Table 4: Count of error cases by type for NN and LR models, in their default configurations. This table only counts those instances in the most clean set (where three or more turkers agree with the gold program) where exactly one system made an error.

In general, a slightly more powerful NN model with access to more relevant data might overcome some of the issues above. We also explored correlations with errors and a number of other criteria, such as text length and frequency of the channels and functions, but found no substantial differences. In general, the remaining errors are often plausible given the noisy input.

6 Future Work

We have achieved a new state-of-the-art on this dataset, though derivation tree accuracy remains low, around 42%. While some errors are caused by training data noise and others are due to noisy test instances, there is still room for improvement. We believe synthetic data is a promising direction. Initial attempts show small improvements; better results may be within reach given more tuning. This may enable gains with recurrent architectures (e.g., LSTMs). The networks here rely primarily on word-based features. Character-based models have resulted in improved syntactic parsing results (Ballesteros et al., 2015). We believe that noisy data such as the If-Then corpus would benefit from character modeling, since the models could be more robust to spelling errors and variations. Another important future work direction is to model the arguments of the If-Then statements. However, that requires segmenting the arguments into those that are general across all users, and those that are specific to the recipe's author. Likely this would require further annotation of the data.

7 Conclusion

In this paper, we address a semantic parsing task, namely translating sentences to If-Then statements.
We model the task as structure prediction, and show improved models using both neural networks and logistic regression. We also discussed various ways to improve generalization and reduce overfitting, including adding synthetic training data by paraphrasing sentences, using multiple grammars, applying feature selection and ensembling multiple systems. We achieve a new state-ofthe-art with 8% absolute accuracy improvement. References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 47–52, Sofia, Bulgaria, August. Association for Computational Linguistics. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. In NAACL 2016. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710, Lisbon, Portugal, September. Association for Computational Linguistics. Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 349–359, Lisbon, Portugal, September. Association for Computational Linguistics. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-13). 734 Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 89–97, Beijing, China, August. Coling 2010 Organizing Committee. S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP), Singapore. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP14). David L Chen. 2012. Fast online lexicon learning for grounded language acquisition. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 430–439. Association for Computational Linguistics. Yann N Dauphin, Harm de Vries, Junyoung Chung, and Yoshua Bengio. 2015. RMSProp and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1370–1380, Baltimore, Maryland, June. Association for Computational Linguistics. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In arXiv:1601.01280. Jay Earley. 1970. An efficient context-free parsing algorithm. Commun. 
ACM, 13(2):94–102, February. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758–764, Atlanta, Georgia, June. Association for Computational Linguistics. Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1100–1111, Seattle, Washington, USA, October. Association for Computational Linguistics. Edward Grefenstette, Phil Blunsom, Nando de Freitas, and Karl Moritz Hermann. 2014. A deep architecture for semantic parsing. In Proceedings of the ACL 2014 Workshop on Semantic Parsing, pages 22–27, Baltimore, MD, June. Association for Computational Linguistics. Sumit Gulwani and Mark Marron. 2014. Nlyze: Interactive programming by natural language for spreadsheet data analysis and manipulation. In SIGMOD. Leif Johnson. 2015. Theanets. https://github. com/lmjohns3/theanets. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June. R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), pages 1062–1068, Pittsburgh, PA, July. Percy Liang. 2005. Semi-supervised learning for natural language. Ph.D. thesis, Massachusetts Institute of Technology. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine learning, 34(1-3):151–175. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, R. Garnett, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2755–2763. Curran Associates, Inc. 735 Yuk Wah Wong and Raymond Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 439–446, New York City, USA, June. Association for Computational Linguistics. William A. Woods. 1977. Lunar rocks in natural English: Explorations in natural language question answering. In Antonio Zampoli, editor, Linguistic Structures Processing. Elsevier North-Holland, New York. 
Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2016. Neural enquirer: Learning to query tables with natural language. In ICLR 2016. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), pages 1050–1055, Portland, OR, August. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the 21st Conference on Uncertainty in AI, pages 658–666.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 66–75, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Models and Inference for Prefix-Constrained Machine Translation Joern Wuebker, Spence Green, John DeNero, Saša Hasan Lilt, Inc. [email protected] Minh-Thang Luong Stanford University [email protected] Abstract We apply phrase-based and neural models to a core task in interactive machine translation: suggesting how to complete a partial translation. For the phrase-based system, we demonstrate improvements in suggestion quality using novel objective functions, learning techniques, and inference algorithms tailored to this task. Our contributions include new tunable metrics, an improved beam search strategy, an n-best extraction method that increases suggestion diversity, and a tuning procedure for a hierarchical joint model of alignment and translation. The combination of these techniques improves next-word suggestion accuracy dramatically from 28.5% to 41.2% in a large-scale English-German experiment. Our recurrent neural translation system increases accuracy yet further to 53.0%, but inference is two orders of magnitude slower. Manual error analysis shows the strengths and weaknesses of both approaches. 1 Introduction A core prediction task in interactive machine translation (MT) is to complete a partial translation (Ortiz-Martínez et al., 2009; Koehn et al., 2014). Sentence completion enables interfaces that are richer than basic post-editing of MT output. For example, the translator can receive updated suggestions after each word typed (Langlais et al., 2000). However, we show that completing partial translations by naïve constrained decoding—the standard in prior work—yields poor suggestion quality. We describe new phrase-based objective functions, learning techniques, and inference algorithms for the sentence completion task.1 We then compare this improved phrase-based system to a state-of-theart recurrent neural translation system in large-scale English-German experiments. A system for completing partial translations takes as input a source sentence and a prefix of the target sentence. It predicts a suffix: a sequence of tokens that extends the prefix to form a full sentence. In an interactive setting, the first words of the suffix are critical; these words are the focus of the user’s attention and can typically be appended to the translation with a single keystroke. We introduce a tuning metric that scores correctness of the whole suffix, but is particularly sensitive to these first words. Phrase-based inference for this task involves aligning the prefix to the source, then generating the suffix by translating the unaligned words. We describe a beam search strategy and a hierarchical joint model of alignment and translation that together improve suggestions dramatically. For English-German news, next-word accuracy increases from 28.5% to 41.2%. An interactive MT system could also display multiple suggestions to the user. We describe an algorithm for efficiently finding the n-best next words directly following a prefix and their corresponding best suffixes. Our experiments show that this approach to n-best list extraction, combined with our other improvements, increased next-word suggestion accuracy of 10-best lists from 33.4% to 55.5%. 
We also train a recurrent neural translation system to maximize the conditional likelihood of the next word following a translation prefix, which is both a standard training objective in neural translation and an ideal fit for our task. This neural system provides even more accurate predictions than our improved phrase-based system. However, inference is two orders of magnitude slower, which is problematic for an interactive setting. We conclude with a manual error analysis that reveals the strengths and weaknesses of both the phrase-based and neural approaches to suffix prediction.

[1] Code available at: https://github.com/stanfordnlp/phrasal

2 Evaluating Suffix Prediction

Let F and E denote the set of all source and target language strings, respectively. Given a source sentence f ∈ F and target prefix ep ∈ E, a predicted suffix es ∈ E can be evaluated by comparing the full sentence e = ep es to a reference e∗. Let e∗s denote the suffix of the reference that follows ep. We define three metrics below that score translations by the characteristics that are most relevant in an interactive setting: the accuracy of the first words of the suffix and the overall quality of the suffix.

Each metric takes example triples (f, ep, e∗) produced during an interactive MT session in which ep was generated in the process of constructing e∗. A simulated corpus of examples can be produced from a parallel corpus of (f, e∗) pairs by selecting prefixes of each e∗. An exhaustive simulation selects all possible prefixes, while a sampled simulation selects only k prefixes uniformly at random for each e∗. Computing metrics for exhaustive simulations is expensive because it requires performing suffix prediction inference for every prefix: |e∗| times for each reference.

Word Prediction Accuracy (WPA) or next-word accuracy (Koehn et al., 2014) is 1 if the first word of the predicted suffix es is also the first word of the reference suffix e∗s, and 0 otherwise. Averaging over examples gives the frequency that the word following the prefix was predicted correctly. In a sampled simulation, all reference words that follow the first word of a sampled suffix are ignored by the metric, so most reference information is unused.

Number of Predicted Words (#prd) is the maximum number of contiguous words at the start of the predicted suffix that match the reference. Like WPA, this metric is 0 if the first word of es is not also the first word of e∗s. In a sampled simulation, all reference words that follow the first mis-predicted word in the sampled suffix are ignored. While it is possible that the metric will require the full reference suffix, most reference information is unused in practice.

Prefix-Bleu (pxBleu): Bleu (Papineni et al., 2002) is computed from the geometric mean of clipped n-gram precisions prec_n(·, ·) and a brevity penalty BP(·, ·). Given a sequence of references E∗ = e∗_1, ..., e∗_t and corresponding predictions E = e_1, ..., e_t,

Bleu(E, E∗) = BP(E, E∗) \cdot \left( \prod_{n=1}^{4} prec_n(E, E∗) \right)^{1/4}

Ortiz-Martínez et al. (2010) use BLEU directly for training an interactive system, but we propose a variant that only scores the predicted suffix and not the input prefix. The pxBleu metric computes Bleu(Ê, Ê∗) for the following constructed sequences Ê and Ê∗:

• For each (f, ep, e∗) and suffix prediction es, Ê includes the full sentence e = ep es.

• For each (f, ep, e∗), Ê∗ is a masked copy of e∗ in which all prefix words that do not match any word in e are replaced by null tokens.
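Before the pxBleu construction is completed just below, here is a minimal sketch of the two word-level metrics above for a single example, treating translations as whitespace-tokenized word lists (a simplifying assumption; the example words are invented).

def wpa(predicted_suffix, reference_suffix):
    """Word Prediction Accuracy: 1 if the first predicted word matches the
    first word of the reference suffix, else 0."""
    if not predicted_suffix or not reference_suffix:
        return 0
    return int(predicted_suffix[0] == reference_suffix[0])

def num_predicted_words(predicted_suffix, reference_suffix):
    """#prd: length of the longest matching prefix of the two suffixes."""
    n = 0
    for p, r in zip(predicted_suffix, reference_suffix):
        if p != r:
            break
        n += 1
    return n

ref_suffix = "morgen in Berlin".split()
pred_suffix = "morgen in Hamburg".split()
print(wpa(pred_suffix, ref_suffix))                  # 1
print(num_predicted_words(pred_suffix, ref_suffix))  # 2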
This construction maintains the original computation of the brevity penalty, but does not include the prefix in the precision calculations. Unlike the two previous metrics, the pxBleu metric uses all available reference information. In order to account for boundary conditions, the reference e∗ is masked by the prefix ep as follows: we replace each of the first |ep| − 3 words with a null token e_null, unless the word also appears in the suffix e∗s. Masking retains the last three words of the prefix so that the first words after the prefix can contribute to the precision of all n-grams that overlap with the prefix, up to n = 4. Words that also appear in the suffix are retained so that their correct prediction in the suffix can contribute to those precisions, which would otherwise be clipped.

2.1 Loss Functions for Learning

All of these metrics can be used as the tuning objective of a phrase-based machine translation system. Tuning toward a sampled simulation that includes one or two prefixes per reference is much faster than using an exhaustive set of prefixes. A linear combination of these metrics can be used to trade off the relative importance of the full suffix and the words immediately following the prefix. With a combined metric, learning can focus on these words while using all available information in the references.

2.2 Keystroke Ratio (KSR)

In addition to these metrics, suffix prediction can be evaluated by the widely used keystroke ratio (KSR) metric (Och et al., 2003). This ratio assumes that any number of characters from the beginning of the suggested suffix can be appended to the user prefix using a single keystroke. It computes the ratio of keystrokes required to enter the reference interactively to the character count of the reference. Our MT architecture does not permit tuning to KSR. Other methods of quantifying effort in an interactive MT system are more appropriate for user studies than for direct evaluation of MT predictions. For example, measuring pupil dilation, pause duration and frequency (Schilperoord, 1996), mouse-action ratio (Sanchis-Trilles et al., 2008), or source difficulty (Bernth and McCord, 2000) would certainly be relevant for evaluating a full interactive system, but are beyond the scope of this work.

3 Phrase-Based Inference

In the log-linear approach to phrase-based translation (Och and Ney, 2004), the distribution of translations e ∈ E given a source sentence f ∈ F is:

p(e|f; w) = \sum_{r:\, src(r)=f,\, tgt(r)=e} \frac{1}{Z(f)} \exp\left[ w^\top \phi(r) \right]    (1)

Here, r is a phrasal derivation with source and target projections src(r) and tgt(r), w ∈ R^d is the vector of model parameters, φ(·) ∈ R^d is a feature map, and Z(f) is an appropriate normalizing constant. For the same model, the distribution over suffixes es ∈ E must also condition on a prefix ep ∈ E:

p(es|ep, f; w) = \sum_{r:\, src(r)=f,\, tgt(r)=ep\,es} \frac{1}{Z(f)} \exp\left[ w^\top \phi(r) \right]    (2)

In phrase-based decoding, the best scoring derivation r given a source sentence f and weights w is found efficiently by beam search, with one beam for every count of source words covered by a partial derivation (known as the source coverage cardinality). To predict a suffix conditioned on a prefix by constrained decoding, Barrachina et al. (2008) and Ortiz-Martínez et al. (2009) modify the beam search by discarding hypotheses (partial derivations) that do not match the prefix ep. We propose target beam search, a two-step inference procedure. The first step is to produce a phrase-based alignment between the target prefix and a subset of the source words.
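For concreteness, a small sketch of the prefix constraint used in that constrained-decoding baseline: partial derivations whose target side has diverged from the user prefix are discarded. Representing hypotheses by their target word lists is a simplification of ours; real decoders apply this check at the phrase level and handle scoring and recombination as well.

def matches_prefix(partial_target, prefix):
    """True if the hypothesis can still be extended to a translation that
    starts with the prefix (word-level check)."""
    overlap = min(len(partial_target), len(prefix))
    return partial_target[:overlap] == prefix[:overlap]

def prune_beam(beam, prefix):
    """Discard partial derivations that do not match the prefix."""
    return [hyp for hyp in beam if matches_prefix(hyp, prefix)]

prefix = "wir treffen uns".split()
beam = ["wir treffen".split(), "wir sehen".split(),
        "wir treffen uns morgen".split()]
print(prune_beam(beam, prefix))
# [['wir', 'treffen'], ['wir', 'treffen', 'uns', 'morgen']]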
The target is aligned left-to-right by appending aligned phrase pairs. However, each beam is associated with a target word count, rather than a source word count. Therefore, each beam contains hypotheses for a fixed prefix of target words. Phrasal translation candidates are bundled and sorted with respect to each target phrase rather than each source phrase. Crucially, the source distortion limit is not enforced during alignment, so that long-range reorderings can be analyzed correctly.

The second step generates the suffix using standard beam search.[2] Once the target prefix is completely aligned, each hypothesis from the final target beam is copied to an appropriate source beam. Search starts with the lowest-count source beam that contains at least one hypothesis. Here, we re-instate the distortion limit with the following modification to avoid search failures: The decoder can always translate any source position before the last source position that was covered in the alignment phase.

[2] We choose cube pruning (Huang and Chiang, 2007) as the beam-filling strategy.

3.1 Synthetic Phrase Pairs

The phrase pairs available during decoding may not be sufficient to align the target prefix to the source. Pre-compiled phrase tables (Koehn et al., 2003) are typically pruned, and dynamic phrase tables (Levenberg et al., 2010) require sampling for efficient lookup. To improve alignment coverage, we include additional synthetic phrases extracted from word-level alignments between the source sentence and target prefix inferred using unpruned lexical statistics. We first find the intersection of two directional word alignments. The directional alignments are obtained similar to IBM Model 2 (Brown et al., 1993) by aligning the most likely source word to each target word. Given a source sequence f = f_1 ... f_{|f|} and a target sequence e = e_1 ... e_{|e|}, we define the alignment a = a_1 ... a_{|e|}, where a_i = j means that e_i is aligned to f_j. The likelihood is modeled by a single-word lexicon probability that is provided by our translation model and an alignment probability modeled as a Poisson distribution Poisson(k, λ) in the distance to the diagonal.

a_i = \arg\max_{j \in \{1, \ldots, |f|\}} p(a_i = j \mid f, e)    (3)

p(a_i = j \mid f, e) = p(e_i \mid f_j) \cdot p(a_i \mid j)    (4)

p(e_i \mid f_j) = \frac{cnt(e_i, f_j)}{cnt(f_j)}    (5)

p(a_i \mid j) = Poisson(|a_i − j|, 1.0)    (6)
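A small sketch of this directional alignment and its symmetrization by intersection. The lexicon probabilities below are invented toy values (the count-based estimates cnt(·,·) are described in the next paragraph), and the distance penalty is simplified here to |i − j| rather than a true distance to the diagonal; both are stated assumptions of the sketch, not the paper's exact formulation.

import math

def poisson_pmf(k, lam=1.0):
    """Poisson(k, lam) probability mass."""
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

def directional_alignment(cands, words, lex):
    """Align each word in `words` to the candidate in `cands` that maximizes
    lexicon probability times the Poisson distance penalty (Equations 3-6).
    `lex` maps (word, candidate) pairs to probabilities."""
    alignment = []
    for i, w in enumerate(words):
        best_j = max(range(len(cands)),
                     key=lambda j: lex.get((w, cands[j]), 1e-9)
                                   * poisson_pmf(abs(i - j)))
        alignment.append(best_j)
    return alignment

def intersected_alignment(src, tgt, lex_e_given_f, lex_f_given_e):
    """Intersection of the two directional alignments as (j, i) index pairs."""
    a_t2s = directional_alignment(src, tgt, lex_e_given_f)   # e_i -> f_j
    a_s2t = directional_alignment(tgt, src, lex_f_given_e)   # f_j -> e_i
    return {(j, i) for i, j in enumerate(a_t2s) if a_s2t[j] == i}

src = "the house is green".split()
tgt = "das Haus ist grün".split()
lex_e_given_f = {("das", "the"): 0.5, ("Haus", "house"): 0.7,
                 ("ist", "is"): 0.8, ("grün", "green"): 0.6}
lex_f_given_e = {(f, e): p for (e, f), p in lex_e_given_f.items()}
print(sorted(intersected_alignment(src, tgt, lex_e_given_f, lex_f_given_e)))
# [(0, 0), (1, 1), (2, 2), (3, 3)]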
However, different feature weights may be appropriate for scoring each step of the inference process. In order to learn different weights for alignment and translation within a unified joint model, we apply the hierarchical adaptation method of Wuebker et al. (2015), which is based on frustratingly easy domain adaptation (FEDA) (Daumé III, 2007). We define three sub-segment domains: prefix, overlap and suffix. The prefix domain contains all phrases that are used for aligning the prefix with the source sentence. Phrases that span both prefix and suffix additionally belong to the overlap domain. Finally, once the prefix has been completely covered, the suffix domain applies to all phrases that are used to translate the remainder of the sentence. The root domain spans the entire phrasal derivation. Formally, given a set of domains D = {root, prefix, overlap, suffix}, each feature is replicated for each domain d ∈D. These replicas can be interpreted as domain-specific “offsets” to the baseline weights. For an original feature vector φ with a set of domains D ⊆D, the replicated feature vector contains |D| copies fd of each feature f ∈φ, one for each d ∈D. fd = ( f, d ∈D 0, otherwise. (8) The weights of the replicated feature space are initialized with 0 except for the root domain, where we copy the baseline weights w. wd = ( w, d is root 0, otherwise. (9) All our phrase-based systems are first tuned without prefixes or domains to maximize Bleu. When tuning for suffix prediction, we keep these baseline weights wroot fixed to maintain baseline translation quality and only update the weights corresponding to the prefix, overlap and suffix domains. 5 Diverse n-best Extraction Consider the interactive MT application setting in which the user is presented with an autocomplete list of alternative translations (Langlais et al., 2000). The user query may be satisfied if the machine predicts the correct completion in its top-n output. However, it is well-known that n-best lists are poor approximations of MT structured output spaces (Macherey et al., 2008; Gimpel et al., 2013). Even very large values of n can fail to produce alternatives that differ in the first words of the suffix, which limits n-best KSR and WPA improvements at test time. For tuning, WPA is often zero for every item on the n-best list, which prevents learning. Fortunately, the prefix can help efficiently enumerate diverse next-word alternatives. If we can find all edges in the decoding lattice that span the prefix ep and suffix es, then we can generate diverse alternatives in precisely the right location in the target. Let G = (V, E) be the search lattice created by decoding, where V are nodes and E are the edges produced by rule applications. For any w ∈V , let parent(w) return v s.t. v, w ∈E, target(w) return the target sequence e defined by following the next pointers from w, and length(w) be the length of the target sequence up to w. During decoding, we set parent pointers and also assign monotonically increasing integer ids to each w. To extract a full sentence completion given an edge v, w ∈E that spans the prefix/suffix boundary, we must find the best path to a goal node efficiently. 
5 Diverse n-best Extraction

Consider the interactive MT application setting in which the user is presented with an autocomplete list of alternative translations (Langlais et al., 2000). The user query may be satisfied if the machine predicts the correct completion in its top-n output. However, it is well-known that n-best lists are poor approximations of MT structured output spaces (Macherey et al., 2008; Gimpel et al., 2013). Even very large values of n can fail to produce alternatives that differ in the first words of the suffix, which limits n-best KSR and WPA improvements at test time. For tuning, WPA is often zero for every item on the n-best list, which prevents learning.

Fortunately, the prefix can help efficiently enumerate diverse next-word alternatives. If we can find all edges in the decoding lattice that span the prefix $e_p$ and suffix $e_s$, then we can generate diverse alternatives in precisely the right location in the target. Let $G = (V, E)$ be the search lattice created by decoding, where $V$ are nodes and $E$ are the edges produced by rule applications. For any $w \in V$, let $\mathrm{parent}(w)$ return $v$ such that $(v, w) \in E$, $\mathrm{target}(w)$ return the target sequence $e$ defined by following the next pointers from $w$, and $\mathrm{length}(w)$ be the length of the target sequence up to $w$. During decoding, we set parent pointers and also assign monotonically increasing integer ids to each $w$. To extract a full sentence completion given an edge $(v, w) \in E$ that spans the prefix/suffix boundary, we must find the best path to a goal node efficiently. To do this, we sort $V$ in reverse topological order and set forward pointers from each node $v$ to the child node on the best goal path. During this traversal, we also mark all child nodes of edges that span the prefix/suffix boundary. Finally, we use the parent and child pointers to extract an n-best list of translations. Algorithm 1 shows the full procedure.

Algorithm 1 Diverse n-best list extraction
Require: Lattice G = (V, E), prefix length P
 1: M = []                                ▷ Marked nodes
 2: for w ∈ V in reverse topological order do
 3:   v = parent(w)                       ▷ (v, w) ∈ E
 4:   if length(v) ≤ P and length(w) > P then
 5:     Add w to M                        ▷ Mark node
 6:   end if
 7:   v.child = v.child ⊕ w               ▷ Child pointer update
 8: end for
 9: N = []                                ▷ n-best target strings
10: for m ∈ M do
11:   Add target(m) to N
12: end for
13: return N

6 Neural machine translation

Neural machine translation (NMT) models the conditional probability $p(e \mid f)$ of translating a source sentence $f$ to a target sentence $e$. In the encoder-decoder NMT framework (Sutskever et al., 2014; Cho et al., 2014), an encoder computes a representation $s$ for each source sentence. From that source representation, the decoder generates a translation one word at a time by maximizing:

$$\log p(e \mid f) = \sum_{i=1}^{|e|} \log p(e_i \mid e_{<i}, f, s) \quad (10)$$

The individual probabilities in Equation 10 are often parameterized by a recurrent neural network which repeatedly predicts the next word $e_i$ given all previous target words $e_{<i}$. Since this model generates translations by repeatedly predicting next words, it is a natural choice for the sentence completion task. Even in unconstrained decoding, it predicts one word at a time conditioned on the most likely prefix.

We modified the state-of-the-art English-German NMT system described in (Luong et al., 2015) to conduct a beam search that constrains the translation to match a fixed prefix (we used the trained models provided by the authors of Luong et al. (2015) and the codebase at https://github.com/lmthang/nmt.matlab). As we decode from left to right, the decoder transitions from a constrained prefix decoding mode to unconstrained beam search. In the constrained mode, where the next word to predict $e_i$ is known, we set the beam size to 1, aggregate the score of predicting $e_i$ immediately without having to sort the softmax distribution over all words, and feed $e_i$ directly to the next time step. Once the prefix has been consumed, the decoder switches to standard beam search with a larger beam size (12 in our experiments). In this mode, the most probable word $e_i$ is passed to the next time step.
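The two-mode decoding just described can be sketched as follows. This is an illustrative Python re-implementation, not the authors' Matlab code: the model interface (initial_state, step, log_prob, topk) is hypothetical, and only the control flow mirrors the description above (beam size 1 while the prefix is force-fed, then standard beam search with a beam of 12).

```python
import heapq

def prefix_constrained_decode(model, src, prefix_ids, beam_size=12, max_len=100, eos_id=2):
    """Constrained mode: score and feed the known prefix tokens directly (beam of 1).
    Unconstrained mode: standard beam search over suffix continuations."""
    state, score = model.initial_state(src), 0.0
    for tok in prefix_ids:
        score += model.log_prob(state, tok)   # aggregate the prefix score without sorting the softmax
        state = model.step(state, tok)        # feed the known token to the next time step
    beam = [(score, list(prefix_ids), state, False)]
    for _ in range(max_len):
        candidates = []
        for sc, hyp, st, done in beam:
            if done:
                candidates.append((sc, hyp, st, True))
                continue
            for tok, lp in model.topk(st, beam_size):   # best next words and their log probs
                candidates.append((sc + lp, hyp + [tok], model.step(st, tok), tok == eos_id))
        beam = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
        if all(done for _, _, _, done in beam):
            break
    return max(beam, key=lambda c: c[0])[1]   # best full hypothesis: prefix + predicted suffix
```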
7 Experimental Results

We evaluate our models and methods for English-French and English-German on two domains: software and news. The phrase-based systems are built with Phrasal (Green et al., 2014), an open source toolkit. We use a dynamic phrase table (Levenberg et al., 2010) and tune parameters with AdaGrad. All systems have 42 dense baseline features. We align the bitexts with mgiza (Gao and Vogel, 2008) and estimate 5-gram language models (LMs) with KenLM (Heafield et al., 2013). The English-French bilingual training data consists of 4.9M sentence pairs from the Common Crawl and Europarl corpora from WMT 2015 (Bojar et al., 2015). The LM was estimated from the target side of the bitext. For English-German we run large-scale experiments. The bitext contains 19.9M parallel segments collected from WMT 2015 and the OPUS collection (Skadiņš et al., 2014). The LM was estimated from the target side of the bitext and the monolingual Common Crawl corpus (Buck et al., 2014), altogether 37.2B running words.

The software test set includes 10k sentence pairs from the Autodesk post-editing corpus (https://autodesk.app.box.com/AutodeskPostEditing). For the news domain we chose the English-French newstest2014 and English-German newstest2015 sets provided for the WMT 2016 shared task (http://www.statmt.org/wmt16). The translation systems were tuned towards the specific domain, using another 10k segments from the Autodesk data or the newstest2013 data set, respectively. On the English-French tune set we randomly select one target prefix from each sentence pair for rapid experimentation. On all other test and tune sets we select two target prefixes at random (we briefly experimented with larger sets of prefixes and also with exhaustive simulation in tuning, but did not observe significant improvements). The selected prefixes remain fixed throughout all experiments. For NMT, we report results both using a single network and an ensemble of eight models using various attention mechanisms (Luong et al., 2015).

7.1 Phrase-based Results

Tables 1 and 2 show the main phrase-based results. The baseline system corresponds to constrained beam search, which performed best in (Ortiz-Martínez et al., 2009) and (Barrachina et al., 2008), where it was referred to as phrase-based (PB) and phrase-based model (PBM), respectively. Our target beam search strategy improves all metrics on both test sets. For English-French, we observe absolute improvements of up to 3.2% pxBleu, 11.4% WPA and 10.6% KSR. We experimented with four different prefix-constrained tuning criteria: pxBleu, WPA, #prd, and the linear combination (pxBleu + WPA)/2. We see that tuning towards prefix decoding increases all metrics. Across our two test sets, the combined metric yielded the most stable results. Here, we obtain gains of up to 3.0% pxBleu, 3.1% WPA and 2.1% KSR. We continue using the linear combination criterion for all subsequent experiments. For English-German, the large-scale setting, we observe similar total gains of up to 3.9% pxBleu, 11.2% WPA and 8.2% KSR. The target beam search procedure contributes the most gain among our various improvements. Table 3 illustrates the differences in the translation output on three example sentences taken from the newstest2015 test set. It is clearly visible that both target beam search and prefix tuning improve the prefix alignment, which results in better translation suffixes.

7.2 Diverse n-best Results

To improve recall in interactive MT, the user can be presented with multiple alternative sentence completions (Langlais et al., 2000), which correspond to an n-best list of translation hypotheses generated by the prefix-constrained inference procedure. The diverse extraction scheme introduced in Section 5 is particularly designed for next-word prediction recall. Table 4 shows results for 10-best lists. We see that WPA is increased by up to 15.3% by including the 10-best candidates, 11.3% being contributed by our novel diverse n-best extraction. Jointly, target beam search, prefix tuning and diverse n-best extraction lead to an absolute improvement of up to 23.5% over the baseline 10-best oracle. We believe that n = 10 suggestions is the maximum number of candidates that should be presented to a user, but we also ran experiments with n = 3 and n = 5, which would result in an interface with reduced cognitive load. These settings yield 5.5% and 10.0% WPA gains respectively on English-German news.
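The prefix simulation protocol described at the start of this section (one or two randomly chosen target prefixes per sentence pair, kept fixed across all experiments) can be sketched as follows. Splitting at a uniformly random word boundary and the fixed random seed are our assumptions; the paper does not specify how the split points are drawn.

```python
import random

def sample_prefixes(references, prefixes_per_sentence=2, seed=0):
    """For each tokenized reference translation, sample fixed prefix/suffix splits.
    Returns a list of (sentence_id, prefix_tokens, suffix_tokens) tuples."""
    rng = random.Random(seed)  # fixed seed so the prefixes stay the same across experiments
    splits = []
    for sid, ref in enumerate(references):
        if len(ref) < 2:
            continue  # nothing to split
        # sample distinct split points; 0 would mean an empty prefix, so start at 1
        points = rng.sample(range(1, len(ref)), min(prefixes_per_sentence, len(ref) - 1))
        for p in sorted(points):
            splits.append((sid, ref[:p], ref[p:]))
    return splits

# Example: two fixed prefixes for one reference sentence
refs = [["Jemenitische", "Medien", "berichten", "von", "einem", "Verkehrschaos",
         "in", "der", "Hauptstadt", "."]]
for sid, prefix, suffix in sample_prefixes(refs):
    print(sid, " ".join(prefix), "|||", " ".join(suffix))
```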
7.3 Comparison with NMT We compare this phrase-based system to the NMT system described in Section 6 for English-German. Table 5 shows the results. We observe a clear advantage of NMT over our best phrase-based system when comparing WPA. For pxBleu, the phrasebased model outperforms the single neural network system on the Autodesk set, but underperforms the ensemble. This stands in contrast to unconstrained full-sentence translation quality, where the phrasebased system is slightly better than the ensemble. The neural system substantially outperforms the phrase-based system for all metrics in the news domain. In an interactive setting, the system must make predictions in near real-time, so we report average decoding times. We observe a clear time vs. accuracy trade-off; the phrase-based is 10.6 to 31.3 times faster than the single network NMT system and more than 100 times faster than the ensemble. Crucially, the phrase-based system runs on a CPU, while NMT requires a GPU for these speeds. Further, the 10-best oracle WPA of the phrase-based system is higher than the NMT ensemble in both genres. Following the example of Neubig et al. (2015), we performed a manual analysis of the first 100 segments on the newstest2015 data set in order to qualitatively compare the constrained translations produced by the phrase-based and single network NMT systems. We observe four main error categories in which the translations differ, for which we have given examples in Table 6. NMT is generally better with long-range verb reorderings, which often lead to the verb being dropped by the phrasebased system. E.g. the word erscheinen in Ex. 1 and veröffentlicht in Ex. 2 are missing in the phrasebased translation. Also, the NMT engine often produces better German grammar and morphological agreement, e.g. kein vs. keine in Ex. 3 or the verb conjugations in Ex. 4. Especially interesting is that the NMT system generated the negation nicht in the second half of Ex. 3. This word does not have 71 autodesk newstest2014 tuning criterion pxBleu WPA #prd KSR pxBleu WPA #prd KSR baseline Bleu 57.9 41.1 1.49 57.8 40.9 38.0 0.96 61.7 target beam search Bleu 61.0 47.2 1.74 50.3 44.1 49.4 1.35 51.1 + prefix tuning (pxBleu+WPA) 2 64.0 50.3 1.95 48.2 44.7 50.9 1.40 50.5 pxBleu 64.0 50.1 1.95 48.2 44.9 50.3 1.38 50.8 WPA 62.4 50.2 1.88 48.1 43.3 50.5 1.34 51.7 #prd 63.8 49.7 1.95 48.4 44.1 50.3 1.37 50.7 Table 1: Phrase-based results on the English-French task. We compare the baseline with the target beam search proposed in this work. Prefix tuning is evaluated with four different tuning criteria. autodesk newstest2015 pxBleu WPA #prd KSR pxBleu WPA #prd KSR baseline 58.5 37.8 1.54 64.7 32.1 28.5 0.61 72.7 target beam search 61.2 44.6 1.78 58.0 36.0 39.7 0.84 64.5 + prefix tuning 62.2 46.0 1.85 57.2 36.0 41.2 0.88 63.7 Table 2: Phrase-based results on English-German, tuned to the linear combination of pxBleu and WPA. a direct correspondence in the English source, but makes the sentence feel more natural in German. On the other hand, NMT sometimes drops content words, as in Ex. 5, where middle-class jobs, Minnesota and Progressive Caucus co-chair remain entirely untranslated by NMT. Finally, incorrect prefix alignment sometimes leads to incorrect portions of the source sentence being translated after the prefix or even superfluous output by the phrase-based engine, like , die in Ex. 6. Table 7 summarizes how many times each of the systems produced a better output than the other, broken down by category. 
8 Related Work Target-mediated interactive MT was first proposed by Foster et al. (1997) and then further developed within the TransType (Langlais et al., 2000) and TransType2 (Esteban et al., 2004; Barrachina et al., 2008) projects. In TransType2, several different approaches were evaluated. Barrachina et al. (2008) reports experimental results that show the superiority of phrase-based models over stochastic finite state transducers and alignment templates, which were extended for the interactive translation paradigm by Och et al. (2003). Ortiz-Martínez et al. (2009) confirm this observation, and find that their own suggested method using partial statistical phrase-based alignments performs on a similar level on most tasks. The approach using phrase-based models is used as the baseline in this paper. In order to make the interaction sufficiently responsive, Barrachina et al. (2008) resort to search within a word graph, which is generated by the translation decoder without constraints at the beginning of the workflow. A given prefix is then matched to the paths within the word graph. This approach was recently refined with more permissive matching criteria by Koehn et al. (2014), who report strong improvements in prediction accuracy. Instead of using a word graph, it is also possible to perform a new search for every interaction (Bender et al., 2005; Ortiz-Martínez et al., 2009), which is the approach we have adopted. Ortiz-Martínez et al. (2009) perform the most similar study to our work in the literature. The authors also define prefix decoding as a two-stage process, but focus on investigating different smoothing techniques, while our work includes new metrics, models, and inference. 9 Conclusion We have shown that both phrase-based and neural translation approaches can be used to complete partial translations. The recurrent neural system provides higher word prediction accuracy, but requires lengthy inference on a GPU. The phrase-based system is fast, produces diverse n-best lists, and provides reasonable prefix-Bleu performance. The complementary strengths of both systems suggest future work in combining these techniques. We have also shown decisively that simply performing constrained decoding for a phrase-based model is not an effective approach to the task of completing translations. Instead, the learning objective, model, and inference procedure should all 72 1. source Suddenly I’m at the National Theatre and I just couldn’t quite believe it. reference "Plötzlich war ich im Nationaltheater und ich konnte es kaum glauben. baseline "Plötzlich war ich im Nationaltheater bin und ich konnte es einfach nicht glauben. target beam search "Plötzlich war ich im National Theatre und das konnte ich nicht ganz glauben. + prefix tuning "Plötzlich war ich im National Theatre, und ich konnte es einfach nicht glauben. 2. source "A little voice inside me said, ’You’re going to have to do 10 minutes while they fix the computer." " reference "Eine kleine Stimme sagte mir "Du musst jetzt 10 Minuten überbrücken, während sie den Computer reparieren." " baseline "Eine kleine Stimme sagte mir "Du musst jetzt 10 Minuten überbrücken, sie legen die müssen, während der Computer." target beam search "Eine kleine Stimme sagte mir "Du musst jetzt 10 Minuten überbrücken zu tun, während sie den Computer reparieren". + prefix tuning "Eine kleine Stimme sagte mir "Du musst jetzt 10 Minuten überbrücken, während sie den Computer reparieren." " 3. source Yemeni media report that there is traffic chaos in the capital. 
reference Jemenitische Medien berichten von einem Verkehrschaos in der Hauptstadt. baseline Jemenitische Medien berichten von einem Verkehrschaos ist der Verkehr in der Hauptstadt. target beam search Jemenitische Medien berichten von einem Verkehrschaos gibt es in der Hauptstadt. + prefix tuning Jemenitische Medien berichten von einem Verkehrschaos in der Hauptstadt. Table 3: Translation examples from the English-German newstest2015 test set. We compare the prefix decoding output of the baseline against target beam search both with and without prefix tuning. The prefix is printed in italics. English-French English-German autodesk newstest2014 autodesk newstest2015 WPA KSR WPA KSR WPA KSR WPA KSR baseline 1-best 41.1 57.8 38.0 61.7 37.8 64.7 28.5 72.7 10-best 48.6 53.3 42.7 58.5 43.9 60.2 33.4 69.5 target beam search 1-best 50.3 48.2 50.9 50.5 46.0 57.2 41.2 63.7 10-best 56.8 43.7 54.9 47.3 51.1 53.2 46.6 60.3 10-best diverse 64.5 39.1 66.2 41.4 57.3 48.4 55.5 54.5 Table 4: Oracle results on the English-French and English-German tasks. We compare the single best result with oracle scores on 10-best lists with standard and diverse n-best extraction on both target beam search with prefix tuning and the phrase-based baseline system. autodesk newstest2015 English-German Bleu pxBleu WPA secs / segment Bleu pxBleu WPA secs / segment target beam search 44.5 62.2 46.0 0.051 22.4 36.0 41.2 0.089 10-best diverse 65.1 57.3 39.5 55.5 NMT single 40.6 61.2 52.3 1.6 23.2 39.2 50.4 1.3 NMT ensemble 44.3 64.7 54.9 7.7 26.3 42.1 53.0 10.0 Table 5: English-German results for the phrase-based system with target beam search and tuned to a combined metric, compared with the recurrent neural translation system. The 10-best diverse line contains oracle scores from a 10-best list; all other scores are computed for a single suffix prediction per example. We also report unconstrained full-sentence Bleu scores. The phrase-based timing results include prefix alignment and synthetic phrase extraction. be tailored to the task. The combination of these changes can adapt a phrase-based translation system to perform prefix alignment and suffix prediction jointly with fewer search errors and greater accuracy for the critical first words of the suffix. In light of the dramatic improvements in prediction quality that result from the techniques we have described, we look forward to investigating the effect on user experience for interactive translation systems that employ these methods. 73 1. source He is due to appear in Karratha Magistrates Court on September 23. reference Er soll am 23. September vor dem Amtsgericht in Karratha erscheinen. phrase-based Er ist aufgrund der in Karratha Magistrates Court am 23. September. NMT Er wird am 23. September in Karratah Magistrates Court erscheinen. 2. source The research, funded by the [...], will be published today in the Medical Journal of Australia. reference Die von [...] finanzierte Studie wird heute im Medical Journal of Australia veröffentlicht. phrase-based Die von [...] finanzierte Studie wird heute im Medical Journal of Australia. NMT Die von [...] finanzierte Studie wird heute im Medical Journal of Australia veröffentlicht. 3. source But it is certainly not a radical initiative - at least by American standards. reference Aber es ist mit Sicherheit keine radikale Initiative - jedenfalls nicht nach amerikanischen Standards. phrase-based Aber es ist sicherlich kein radikale Initiative - zumindest von den amerikanischen Standards. 
NMT Aber es ist gewiss keine radikale Initiative - zumindest nicht nach amerikanischem Maßstab. 4. source Now everyone knows that the labor movement did not diminish the strength of the nation but enlarged it. reference Jetzt wissen alle, dass die Arbeiterbewegung die Stärke der Nation nicht einschränkte, sondern sie vergrößerte. phrase-based Jetzt wissen alle, dass die Arbeiterbewegung die Stärke der Nation nicht schmälern, aber vergrößert . NMT Jetzt wissen alle, dass die Arbeiterbewegung die Stärke der Nation nicht verringert, sondern erweitert hat. 5. source "As go unions, so go middle-class jobs," says Ellison, the Minnesota Democrat who serves as a Congressional Progressive Caucus co-chair. reference "So wie Gewerkschaften sterben, sterben auch die Mittelklassejobs," sagte Ellison, ein Demokrat aus Minnesota und stellvertretender Vorsitzender des Progressive Caucus im Kongress. phrase-based "So wie Gewerkschaften sterben, so Mittelklasse-Jobs", sagt Ellison, der Minnesota Demokrat, dient als Congressional Progressive Caucus Mitveranstalter. NMT "So wie Gewerkschaften sterben, so gehen die gehen," sagt Ellison, der Liberalen, der als Kongresses des eine dient. 6. source The opposition politician, Imran Khan, accuses Prime Minister Sharif of rigging the parliamentary elections, which took place in May last year. reference Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben. phrase-based Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben. , die NMT Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben. Table 6: Example sentences from the English-German newstest2015 test set. We compare the prefix decoding output of phrase-based target beam search against the single network neural machine translation (NMT) engine, printing the prefix in italics. The examples illustrate the four error categories missing verb (Ex. 1 and 2), grammar / morphology (Ex. 3 and 4), missing content words (Ex. 5) and alignment (Ex. 6). #better phrase-based NMT missing verb 1 19 grammar / morphology 0 15 missing content words 17 3 alignment 0 6 Table 7: Result of the manual analysis on the first 100 segments of the English-German newstest2015 test set. For each of the four error categories we count how many times one of the systems produced a better output. Acknowledgments Minh-Thang Luong was partially supported by NSF Award IIS-1514268 and partially supported by a gift from Bloomberg L.P. References Sergio Barrachina, Oliver Bender, Francisco Casacuberta, Jorge Civera, Elsa Cubel, Shahram Khadivi, et al. 2008. Statistical approaches to computerassisted translation. Computational Linguistics, 35(1):3–28. Oliver Bender, Saša Hasan, David Vilar, Richard Zens, and Hermann Ney. 2005. Comparison of generation strategies for interactive machine translation. In EAMT. Arendse Bernth and Michael C. McCord. 2000. The effect of source analysis on translation confidence. In AMTA. Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, et al. 2015. Findings of the 2015 Workshop on Statistical Machine Translation. In WMT. Peter F. Brown, Stephan A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The 74 Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263–311. 
Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In LREC. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP. Hal Daumé III. 2007. Frustratingly easy domain adaptation. In ACL. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, July. José Esteban, José Lorenzo, Antonio S. Valderrábanos, and Guy Lapalme. 2004. TransType2 - an innovative computer-assisted translation system. In ACL. George Foster, Pierre Isabelle, and Pierre Plamondon. 1997. Target-Text Mediated Interactive Machine Translation. Machine Translation, 12(1–2):175– 194. Qin Gao and Stephan Vogel. 2008. Parallel implementations of word alignment tool. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing. Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In EMNLP. Spence Green, Sida Wang, Daniel Cer, and Christopher D. Manning. 2013. Fast and adaptive online training of feature-rich translation models. In ACL. Spence Green, Daniel Cer, and Christopher D. Manning. 2014. Phrasal: A toolkit for new directions in statistical machine translation. In WMT. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In ACL. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In ACL. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL. Philipp Koehn, Chara Tsoukala, and Herve SaintAmand. 2014. Refinements to interactive translation prediction based on search graphs. In ACL. Philippe Langlais, George Foster, and Guy Lapalme. 2000. TransType: a Computer-Aided Translation Typing System. In NAACL Workshop on Embedded Machine Translation Systems. Abby Levenberg, Chris Callison-Burch, and Miles Osborne. 2010. Stream-based translation models for statistical machine translation. In NAACL. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In EMNLP. Wolfgang Macherey, Franz Josef Och, Ignacio Thayer, and Jakop Uszkoreit. 2008. Lattice-based minimum error rate training for statistical machine translation. In EMNLP. Graham Neubig, Makoto Morishita, and Satoshi Nakamura. 2015. Neural reranking improves subjective quality of machine translation: NAIST at WAT2015. In 2nd Workshop on Asian Translation (WAT2015). Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–450. Franz Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In EMNLP. Franz Josef Och, Richard Zens, and Hermann Ney. 2003. Efficient search for interactive statistical machine translation. In EACL. Daniel Ortiz-Martínez, Ismael García-Varea, and Francisco Casacuberta. 2009. Interactive machine translation based on partial statistical phrase-based alignments. In RANLP. Daniel Ortiz-Martínez, Ismael García-Varea, and Francisco Casacuberta. 2010. 
Online learning for interactive statistical machine translation. In NAACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL. Germán Sanchis-Trilles, Daniel Ortiz-Martínez, Jorge Civera, Francisco Casacuberta, Enrique Vidal, and Hieu Hoang. 2008. Improving interactive machine translation via mouse actions. In EMNLP. Joost Schilperoord. 1996. It’s about Time: Temporal Aspects of Cognitive Processes in Text Production. Rodopi. Raivis Skadin¸š, Jörg Tiedemann, Roberts Rozis, and Daiga Deksne. 2014. Billions of parallel words for free: Building and using the EU bookshop corpus. In LREC. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS. Joern Wuebker, Spence Green, and John DeNero. 2015. Hierarchical incremental adaptation for statistical machine translation. In EMNLP. 75
Universal Dependencies for Learner English

Yevgeni Berzak (CSAIL, MIT) [email protected], Jessica Kenney (EECS & Linguistics, MIT) [email protected], Carolyn Spadine (Linguistics, MIT) [email protected], Jing Xian Wang (EECS, MIT) [email protected], Lucia Lam (MECHE, MIT) [email protected], Keiko Sophie Mori (Linguistics, MIT) [email protected], Sebastian Garza (Linguistics, MIT) [email protected], Boris Katz (CSAIL, MIT) [email protected]

Abstract

We introduce the Treebank of Learner English (TLE), the first publicly available syntactic treebank for English as a Second Language (ESL). The TLE provides manually annotated POS tags and Universal Dependency (UD) trees for 5,124 sentences from the Cambridge First Certificate in English (FCE) corpus. The UD annotations are tied to a pre-existing error annotation of the FCE, whereby full syntactic analyses are provided for both the original and error corrected versions of each sentence. Further on, we delineate ESL annotation guidelines that allow for consistent syntactic treatment of ungrammatical English. Finally, we benchmark POS tagging and dependency parsing performance on the TLE dataset and measure the effect of grammatical errors on parsing accuracy. We envision the treebank to support a wide range of linguistic and computational research on second language acquisition as well as automatic processing of ungrammatical language. (The treebank is available at universaldependencies.org. The annotation manual used in this project and a graphical query engine are available at esltreebank.org.)

1 Introduction

The majority of the English text available worldwide is generated by non-native speakers (Crystal, 2003). Such texts introduce a variety of challenges, most notably grammatical errors, and are of paramount importance for the scientific study of language acquisition as well as for NLP. Despite the ubiquity of non-native English, there is currently no publicly available syntactic treebank for English as a Second Language (ESL).

To address this shortcoming, we present the Treebank of Learner English (TLE), a first-of-its-kind resource for non-native English, containing 5,124 sentences manually annotated with POS tags and dependency trees. The TLE sentences are drawn from the FCE dataset (Yannakoudakis et al., 2011), and authored by English learners from 10 different native language backgrounds. The treebank uses the Universal Dependencies (UD) formalism (De Marneffe et al., 2014; Nivre et al., 2016), which provides a unified annotation framework across different languages and is geared towards multilingual NLP (McDonald et al., 2013). This characteristic allows our treebank to support computational analysis of ESL using not only English based but also multilingual approaches which seek to relate ESL phenomena to native language syntax.

While the annotation inventory and guidelines are defined by the English UD formalism, we build on previous work in learner language analysis (Díaz-Negrillo et al., 2010; Dickinson and Ragheb, 2013) to formulate an additional set of annotation conventions aiming at a uniform treatment of ungrammatical learner language. Our annotation scheme uses a two-layer analysis, whereby a distinct syntactic annotation is provided for the original and the corrected version of each sentence.
This approach is enabled by a pre-existing error annotation of the FCE (Nicholls, 2003) which is used to generate an error corrected variant of the dataset. Our inter-annotator agreement results provide evidence for the ability of the annotation scheme to support consistent annotation of ungrammatical structures. 737 Finally, a corpus that is annotated with both grammatical errors and syntactic dependencies paves the way for empirical investigation of the relation between grammaticality and syntax. Understanding this relation is vital for improving tagging and parsing performance on learner language (Geertzen et al., 2013), syntax based grammatical error correction (Tetreault et al., 2010; Ng et al., 2014), and many other fundamental challenges in NLP. In this work, we take the first step in this direction by benchmarking tagging and parsing accuracy on our dataset under different training regimes, and obtaining several estimates for the impact of grammatical errors on these tasks. To summarize, this paper presents three contributions. First, we introduce the first large scale syntactic treebank for ESL, manually annotated with POS tags and universal dependencies. Second, we describe a linguistically motivated annotation scheme for ungrammatical learner English and provide empirical support for its consistency via inter-annotator agreement analysis. Third, we benchmark a state of the art parser on our dataset and estimate the influence of grammatical errors on the accuracy of automatic POS tagging and dependency parsing. The remainder of this paper is structured as follows. We start by presenting an overview of the treebank in section 2. In sections 3 and 4 we provide background information on the annotation project, and review the main annotation stages leading to the current form of the dataset. The ESL annotation guidelines are summarized in section 5. Inter-annotator agreement analysis is presented in section 6, followed by parsing experiments in section 7. Finally, we review related work in section 8 and present the conclusion in section 9. 2 Treebank Overview The TLE currently contains 5,124 sentences (97,681 tokens) with POS tag and dependency annotations in the English Universal Dependencies (UD) formalism (De Marneffe et al., 2014; Nivre et al., 2016). The sentences were obtained from the FCE corpus (Yannakoudakis et al., 2011), a collection of upper intermediate English learner essays, containing error annotations with 75 error categories (Nicholls, 2003). Sentence level segmentation was performed using an adaptation of the NLTK sentence tokenizer2. Under-segmented 2http://www.nltk.org/api/nltk.tokenize.html sentences were split further manually. Word level tokenization was generated using the Stanford PTB word tokenizer3. The treebank represents learners with 10 different native language backgrounds: Chinese, French, German, Italian, Japanese, Korean, Portuguese, Spanish, Russian and Turkish. For every native language, we randomly sampled 500 automatically segmented sentences, under the constraint that selected sentences have to contain at least one grammatical error that is not punctuation or spelling. The TLE annotations are provided in two versions. The first version is the original sentence authored by the learner, containing grammatical errors. The second, corrected sentence version, is a grammatical variant of the original sentence, generated by correcting all the grammatical errors in the sentence according to the manual error annotation provided in the FCE dataset. 
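As an illustration of how a corrected sentence version can be derived from the FCE error markup (the <ns type="..."><i>original</i><c>correction</c></ns> spans shown in the examples of Section 5), here is a minimal Python sketch. The regular-expression approach and the whitespace cleanup are our own simplification; nested or overlapping error spans are not handled.

```python
import re

# FCE error markup as it appears in the paper's examples:
#   <ns type="..."><i>original</i><c>correction</c></ns>
# <i> or <c> may be missing for pure insertions or deletions.
NS = re.compile(r'<ns[^>]*>\s*(?:<i>(.*?)</i>)?\s*(?:<c>(.*?)</c>)?\s*</ns>')

def strip_markup(marked, keep):
    out = re.sub(NS, lambda m: m.group(keep) or '', marked)
    return ' '.join(out.split())  # collapse the whitespace left behind

def original_version(marked):
    return strip_markup(marked, 1)   # learner's original text

def corrected_version(marked):
    return strip_markup(marked, 2)   # error corrected version

sent = 'That time I had to sleep in <ns type="MD"><c>a</c></ns> tent.'
print(original_version(sent))    # That time I had to sleep in tent.
print(corrected_version(sent))   # That time I had to sleep in a tent.
```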
The resulting corrected sentences constitute a parallel corpus of standard English. Table 1 presents basic statistics of both versions of the annotated sentences. original corrected sentences 5,124 5,124 tokens 97,681 98,976 sentence length 19.06 (std 9.47) 19.32 (std 9.59) errors per sentence 2.67 (std 1.9) authors 924 native languages 10 Table 1: Statistics of the TLE. Standard deviations are denoted in parenthesis. To avoid potential annotation biases, the annotations of the treebank were created manually from scratch, without utilizing any automatic annotation tools. To further assure annotation quality, each annotated sentence was reviewed by two additional annotators. To the best of our knowledge, TLE is the first large scale English treebank constructed in a completely manual fashion. 3 Annotator Training The treebank was annotated by six students, five undergraduates and one graduate. Among the undergraduates, three are linguistics majors and two are engineering majors with a linguistic minor. The graduate student is a linguist specializing in syntax. An additional graduate student in NLP participated in the final debugging of the dataset. 3http://nlp.stanford.edu/software/tokenizer.shtml 738 Prior to annotating the treebank sentences, the annotators were trained for about 8 weeks. During the training, the annotators attended tutorials on dependency grammars, and learned the English UD guidelines4, the Penn Treebank POS guidelines (Santorini, 1990), the grammatical error annotation scheme of the FCE (Nicholls, 2003), as well as the ESL guidelines described in section 5 and in the annotation manual. Furthermore, the annotators completed six annotation exercises, in which they were required to annotate POS tags and dependencies for practice sentences from scratch. The exercises were done individually, and were followed by group meetings in which annotation disagreements were discussed and resolved. Each of the first three exercises consisted of 20 sentences from the UD gold standard for English, the English Web Treebank (EWT) (Silveira et al., 2014). The remaining three exercises contained 20-30 ESL sentences from the FCE. Many of the ESL guidelines were introduced or refined based on the disagreements in the ESL practice exercises and the subsequent group discussions. Several additional guidelines were introduced in the course of the annotation process. During the training period, the annotators also learned to use a search tool that enables formulating queries over word and POS tag sequences as regular expressions and obtaining their annotation statistics in the EWT. After experimenting with both textual and graphical interfaces for performing the annotations, we converged on a simple text based format described in section 4.1, where the annotations were filled in using a spreadsheet or a text editor, and tested with a script for detecting annotation typos. The annotators continued to meet and discuss annotation issues on a weekly basis throughout the entire duration of the project. 4 Annotation Procedure The formation of the treebank was carried out in four steps: annotation, review, disagreement resolution and targeted debugging. 4.1 Annotation In the first stage, the annotators were given sentences for annotation from scratch. We use a CoNLL based textual template in which each word is annotated in a separate line. Each line contains 6 columns, the first of which has the word index 4http://universaldependencies.org/#en (IND) and the second the word itself (WORD). 
The remaining four columns had to be filled in with a Universal POS tag (UPOS), a Penn Treebank POS tag (POS), a head word index (HIND) and a dependency relation (REL) according to version 1 of the English UD guidelines. The annotation section of the sentence is preceded by a metadata header. The first field in this header, denoted with SENT, contains the FCE error coded version of the sentence. The annotators were instructed to verify the error annotation, and add new error annotations if needed. Corrections to the sentence segmentation are specified in the SEGMENT field5. Further down, the field TYPO is designated for literal annotation of spelling errors and ill formed words that happen to form valid words (see section 5.2). The example below presents a pre-annotated original sentence given to an annotator. #SENT=That time I had to sleep in <ns type= "MD"><c>a</c></ns> tent. #SEGMENT= #TYPO= #IND WORD UPOS POS HIND REL 1 That 2 time 3 I 4 had 5 to 6 sleep 7 in 8 tent 9 . Upon completion of the original sentence, the annotators proceeded to annotate the corrected sentence version. To reduce annotation time, annotators used a script that copies over annotations from the original sentence and updates head indices of tokens that appear in both sentence versions. Head indices and relation labels were filled in only if the head word of the token appeared in both the original and corrected sentence versions. Tokens with automatically filled annotations included an additional # sign in a seventh column of each word’s annotation. The # signs had to be removed, and the corresponding annotations either approved or changed as appropriate. Tokens that did not appear in the original sentence version were annotated from scratch. 5The released version of the treebank splits the sentences according to the markings in the SEGMENT field when those apply both to the original and corrected versions of the sentence. Resulting segments without grammatical errors in the original version are currently discarded. 739 4.2 Review All annotated sentences were randomly assigned to a second annotator (henceforth reviewer), in a double blind manner. The reviewer’s task was to mark all the annotations that they would have annotated differently. To assist the review process, we compiled a list of common annotation errors, available in the released annotation manual. The annotations were reviewed using an active editing scheme in which an explicit action was required for all the existing annotations. The scheme was introduced to prevent reviewers from overlooking annotation issues due to passive approval. Specifically, an additional # sign was added at the seventh column of each token’s annotation. The reviewer then had to either “sign off” on the existing annotation by erasing the # sign, or provide an alternative annotation following the # sign. 4.3 Disagreement Resolution In the final stage of the annotation process all annotator-reviewer disagreements were resolved by a third annotator (henceforth judge), whose main task was to decide in favor of the annotator or the reviewer. Similarly to the review process, the judging task was carried out in a double blind manner. Judges were allowed to resolve annotatorreviewer disagreements with a third alternative, as well as introduce new corrections for annotation issues overlooked by the reviewers. 
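A small parser for the annotation template shown above might look as follows. The file layout (metadata headers starting with '#' followed by six whitespace-separated columns, with an optional trailing '#' marking automatically copied annotations) is taken from the description in this section, while the function name and the dictionary representation are our own, and the sketch assumes a fully annotated block.

```python
def parse_tle_block(lines):
    """Parse one annotated sentence in the template described above:
    '#KEY=value' metadata lines, a '#IND WORD ...' column header,
    then token lines with IND, WORD, UPOS, POS, HIND, REL."""
    meta, tokens = {}, []
    for raw in lines:
        line = raw.rstrip('\n')
        if not line.strip():
            continue
        if line.startswith('#IND'):
            continue                      # column header line
        if line.startswith('#'):
            key, _, value = line.lstrip('#').partition('=')
            meta[key.strip()] = value.strip()
            continue
        fields = line.split()
        auto_filled = fields[-1] == '#'   # annotation copied over from the original version
        ind, word, upos, pos, hind, rel = fields[:6]
        tokens.append({'ind': int(ind), 'word': word, 'upos': upos, 'pos': pos,
                       'hind': int(hind), 'rel': rel, 'auto_filled': auto_filled})
    return meta, tokens
```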
Another task performed by the judges was to mark acceptable alternative annotations for ambiguous structures determined through review disagreements or otherwise present in the sentence. These annotations were specified in an additional metadata field called AMBIGUITY. The ambiguity markings are provided along with the resolved version of the annotations. 4.4 Final Debugging After applying the resolutions produced by the judges, we queried the corpus with debugging tests for specific linguistics constructions. This additional testing phase further reduced the number of annotation errors and inconsistencies in the treebank. Including the training period, the treebank creation lasted over a year, with an aggregate of more than 2,000 annotation hours. 5 Annotation Scheme for ESL Our annotations use the existing inventory of English UD POS tags and dependency relations, and follow the standard UD annotation guidelines for English. However, these guidelines were formulated with grammatical usage of English in mind and do not cover non canonical syntactic structures arising due to grammatical errors6. To encourage consistent and linguistically motivated annotation of such structures, we formulated a complementary set of ESL annotation guidelines. Our ESL annotation guidelines follow the general principle of literal reading, which emphasizes syntactic analysis according to the observed language usage. This strategy continues a line of work in SLA which advocates for centering analysis of learner language around morpho-syntactic surface evidence (Ragheb and Dickinson, 2012; Dickinson and Ragheb, 2013). Similarly to our framework, which includes a parallel annotation of corrected sentences, such strategies are often presented in the context of multi-layer annotation schemes that also account for error corrected sentence forms (Hirschmann et al., 2007; DıazNegrillo et al., 2010; Rosen et al., 2014). Deploying a strategy of literal annotation within UD, a formalism which enforces cross-linguistic consistency of annotations, will enable meaningful comparisons between non-canonical structures in English and canonical structures in the author’s native language. As a result, a key novel characteristic of our treebank is its ability to support crosslingual studies of learner language. 5.1 Literal Annotation With respect to POS tagging, literal annotation implies adhering as much as possible to the observed morphological forms of the words. Syntactically, argument structure is annotated according to the usage of the word rather than its typical distribution in the relevant context. The following list of conventions defines the notion of literal reading for some of the common non canonical structures associated with grammatical errors. Argument Structure Extraneous prepositions We annotate all nominal dependents introduced by extraneous prepositions 6The English UD guidelines do address several issues encountered in informal genres, such as the relation “goeswith”, which is used for fragmented words resulting from typos. 740 as nominal modifiers. In the following sentence, “him” is marked as a nominal modifier (nmod) instead of an indirect object (iobj) of “give”. #SENT=...I had to give <ns type="UT"><i>to</i> </ns> him water... ... 21 I PRON PRP 22 nsubj 22 had VERB VBD 5 parataxis 23 to PART TO 24 mark 24 give VERB VB 22 xcomp 25 to ADP IN 26 case 26 him PRON PRP 24 nmod 27 water NOUN NN 24 dobj ... 
Omitted prepositions We treat nominal dependents of a predicate that are lacking a preposition as arguments rather than nominal modifiers. In the example below, “money” is marked as a direct object (dobj) instead of a nominal modifier (nmod) of “ask”. As “you” functions in this context as a second argument of “ask”, it is annotated as an indirect object (iobj) instead of a direct object (dobj). #SENT=...I have to ask you <ns type="MT"> <c>for</c></ns> the money <ns type= "RT"> <i>of</i><c>for</c></ns> the tickets back. ... 12 I PRON PRP 13 nsubj 13 have VERB VBP 2 conj 14 to PART TO 15 mark 15 ask VERB VB 13 xcomp 16 you PRON PRP 15 iobj 17 the DET DT 18 det 18 money NOUN NN 15 dobj 19 of ADP IN 21 case 20 the DET DT 21 det 21 tickets NOUN NNS 18 nmod 22 back ADV RB 15 advmod 23 . PUNCT . 2 punct Tense Cases of erroneous tense usage are annotated according to the morphological tense of the verb. For example, below we annotate “shopping” with present participle VBG, while the correction “shop” is annotated in the corrected version of the sentence as VBP. #SENT=...when you <ns type="TV"><i>shopping</i> <c>shop</c></ns>... ... 4 when ADV WRB 6 advmod 5 you PRON PRP 6 nsubj 6 shopping VERB VBG 12 advcl ... Word Formation Erroneous word formations that are contextually plausible and can be assigned with a PTB tag are annotated literally. In the following example, “stuffs” is handled as a plural count noun. #SENT=...into fashionable <ns type="CN"> <i>stuffs</i><c>stuff</c></ns>... ... 7 into ADP IN 9 case 8 fashionable ADJ JJ 9 amod 9 stuffs NOUN NNS 2 ccomp ... Similarly, in the example below we annotate “necessaryiest” as a superlative. #SENT=The necessaryiest things... 1 The DET DT 3 det 2 necessaryiest ADJ JJS 3 amod 3 things NOUN NNS 0 root ... 5.2 Exceptions to Literal Annotation Although our general annotation strategy for ESL follows literal sentence readings, several types of word formation errors make such readings uninformative or impossible, essentially forcing certain words to be annotated using some degree of interpretation (Ros´en and De Smedt, 2010). We hence annotate the following cases in the original sentence according to an interpretation of an intended word meaning, obtained from the FCE error correction. Spelling Spelling errors are annotated according to the correctly spelled version of the word. To support error analysis of automatic annotation tools, misspelled words that happen to form valid words are annotated in the metadata field TYPO for POS tags with respect to the most common usage of the misspelled word form. In the example below, the TYPO field contains the typical POS annotation of “where”, which is clearly unintended in the context of the sentence. #SENT=...we <ns type="SX"><i>where</i> <c>were</c></ns> invited to visit... #TYPO=5 ADV WRB ... 4 we PRON PRP 6 nsubjpass 5 where AUX VBD 6 auxpass 6 invited VERB VBN 0 root 7 to PART TO 8 mark 8 visit VERB VB 6 xcomp ... Word Formation Erroneous word formations that cannot be assigned with an existing PTB tag are annotated with respect to the correct word form. #SENT=I am <ns type="IV"><i>writting</i> <c>writing</c></ns>... 1 I PRON PRP 3 nsubj 2 am AUX VBP 3 aux 3 writting VERB VBG 0 root ... In particular, ill formed adjectives that have a plural suffix receive a standard adjectival POS tag. When applicable, such cases also receive an additional marking for unnecessary agreement in the error annotation using the attribute “ua”. #SENT=...<ns type="IJ" ua=true> <i>interestings</i><c>interesting</c></ns> things... 
741 ... 6 interestings ADJ JJ 7 amod 7 things NOUN NNS 3 dobj ... Wrong word formations that result in a valid, but contextually implausible word form are also annotated according to the word correction. In the example below, the nominal form “sale” is likely to be an unintended result of an ill formed verb. Similarly to spelling errors that result in valid words, we mark the typical literal POS annotation in the TYPO metadata field. #SENT=...they do not <ns type="DV"><i>sale</i> <c>sell</c></ns> them... #TYPO=15 NOUN NN ... 12 they PRON PRP 15 nsubj 13 do AUX VBP 15 aux 14 not PART RB 15 neg 15 sale VERB VB 0 root 16 them PRON PRP 15 dobj ... Taken together, our ESL conventions cover many of the annotation challenges related to grammatical errors present in the TLE. In addition to the presented overview, the complete manual of ESL guidelines used by the annotators is publicly available. The manual contains further details on our annotation scheme, additional annotation guidelines and a list of common annotation errors. We plan to extend and refine these guidelines in future releases of the treebank. 6 Editing Agreement We utilize our two step review process to estimate agreement rates between annotators7. We measure agreement as the fraction of annotation tokens approved by the editor. Table 2 presents the agreement between annotators and reviewers, as well as the agreement between reviewers and the judges. Agreement measurements are provided for both the original the corrected versions of the dataset. Overall, the results indicate a high agreement rate in the two editing tasks. Importantly, the gap between the agreement on the original and corrected sentences is small. Note that this result is obtained despite the introduction of several ESL annotation guidelines in the course of the annotation process, which inevitably increased the number of edits related to grammatical errors. We interpret this outcome as evidence for the effectiveness of the ESL annotation scheme in supporting consistent annotations of learner language. 7All experimental results on agreement and parsing exclude punctuation tokens. Annotator-Reviewer UPOS POS HIND REL original 98.83 98.35 97.74 96.98 corrected 99.02 98.61 97.97 97.20 Reviewer-Judge original 99.72 99.68 99.37 99.15 corrected 99.80 99.77 99.45 99.28 Table 2: Inter-annotator agreement on the entire TLE corpus. Agreement is measured as the fraction of tokens that remain unchanged after an editing round. The four evaluation columns correspond to universal POS tags, PTB POS tags, unlabeled attachment, and dependency labels. Cohen’s Kappa scores (Cohen, 1960) for POS tags and dependency labels in all evaluation conditions are above 0.96. 7 Parsing Experiments The TLE enables studying parsing for learner language and exploring relationships between grammatical errors and parsing performance. Here, we present parsing benchmarks on our dataset, and provide several estimates for the extent to which grammatical errors degrade the quality of automatic POS tagging and dependency parsing. Our first experiment measures tagging and parsing accuracy on the TLE and approximates the global impact of grammatical errors on automatic annotation via performance comparison between the original and error corrected sentence versions. In this, and subsequent experiments, we utilize version 2.2 of the Turbo tagger and Turbo parser (Martins et al., 2013), state of the art tools for statistical POS tagging and dependency parsing. 
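Before turning to the parsing results, the agreement measure reported in Table 2 can be made concrete with a short sketch: for two passes over the same sentence (e.g., annotator vs. reviewer), count the fraction of non-punctuation tokens whose annotation was left unchanged, separately for each annotation field. Token dictionaries with the fields upos, pos, hind and rel are assumed, as in the earlier parsing sketch; the Cohen's kappa computation mentioned above is omitted.

```python
def editing_agreement(before, after, fields=('upos', 'pos', 'hind', 'rel')):
    """Fraction of tokens whose annotation the editor left unchanged,
    per field, excluding punctuation tokens (as in Table 2)."""
    agree = {f: 0 for f in fields}
    total = 0
    for tok_before, tok_after in zip(before, after):
        if tok_before['upos'] == 'PUNCT':
            continue                      # punctuation is excluded from all evaluations
        total += 1
        for f in fields:
            agree[f] += tok_before[f] == tok_after[f]
    return {f: 100.0 * agree[f] / total for f in fields} if total else {}
```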
Table 3 presents tagging and parsing results on a test set of 500 TLE sentences (9,591 original tokens, 9,700 corrected tokens). Results are provided for three different training regimes. The first regime uses the training portion of version 1.3 of the EWT, the UD English treebank, containing 12,543 sentences (204,586 tokens). The second training mode uses 4,124 training sentences (78,541 original tokens, 79,581 corrected tokens) from the TLE corpus. In the third setup we combine these two training corpora. The remaining 500 TLE sentences (9,549 original tokens, 9,695 corrected tokens) are allocated to a development set, not used in this experiment. Parsing of the test sentences was performed on predicted POS tags. The EWT training regime, which uses out of domain texts written in standard English, provides the lowest performance on all the evaluation met742 Test set Train Set UPOS POS UAS LA LAS TLEorig EWT 91.87 94.28 86.51 88.07 81.44 TLEcorr EWT 92.9 95.17 88.37 89.74 83.8 TLEorig TLEorig 95.88 94.94 87.71 89.26 83.4 TLEcorr TLEcorr 96.92 95.17 89.69 90.92 85.64 TLEorig EWT+TLEorig 93.33 95.77 90.3 91.09 86.27 TLEcorr EWT+TLEcorr 94.27 96.48 92.15 92.54 88.3 Table 3: Tagging and parsing results on a test set of 500 sentences from the TLE corpus. EWT is the English UD treebank. TLEorig are original sentences from the TLE. TLEcorr are the corresponding error corrected sentences. rics. An additional factor which negatively affects performance in this regime are systematic differences in the EWT annotation of possessive pronouns, expletives and names compared to the UD guidelines, which are utilized in the TLE. In particular, the EWT annotates possessive pronoun UPOS as PRON rather than DET, which leads the UPOS results in this setup to be lower than the PTB POS results. Improved results are obtained using the TLE training data, which, despite its smaller size, is closer in genre and syntactic characteristics to the TLE test set. The strongest PTB POS tagging and parsing results are obtained by combining the EWT with the TLE training data, yielding 95.77 POS accuracy and a UAS of 90.3 on the original version of the TLE test set. The dual annotation of sentences in their original and error corrected forms enables estimating the impact of grammatical errors on tagging and parsing by examining the performance gaps between the two sentence versions. Averaged across the three training conditions, the POS tagging accuracy on the original sentences is lower than the accuracy on the sentence corrections by 1.0 UPOS and 0.61 POS. Parsing performance degrades by 1.9 UAS, 1.59 LA and 2.21 LAS. To further elucidate the influence of grammatical errors on parsing quality, table 4 compares performance on tokens in the original sentences appearing inside grammatical error tags to those appearing outside such tags. Although grammatical errors may lead to tagging and parsing errors with respect to any element in the sentence, we expect erroneous tokens to be more challenging to analyze compared to grammatical tokens. This comparison indeed reveals a substantial difference between the two types of tokens, with an average gap of 5.0 UPOS, 6.65 POS, 4.67 UAS, 6.56 LA and 7.39 LAS. 
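The per-token comparison above can be reproduced with a simple scorer: POS accuracy, unlabeled attachment (UAS), label accuracy (LA) and labeled attachment (LAS) over non-punctuation tokens, optionally restricted to the tokens inside grammatical error spans. The code below is an illustrative sketch assuming the same token dictionaries as in the earlier sketches; it is not the evaluation script used by the authors.

```python
def attachment_scores(gold, pred, restrict_to=None):
    """UAS / LA / LAS / POS accuracy over aligned gold and predicted tokens,
    excluding punctuation. `restrict_to` is an optional set of token indices,
    e.g. only tokens marked with grammatical errors."""
    uas = la = las = pos = total = 0
    for i, (g, p) in enumerate(zip(gold, pred)):
        if g['upos'] == 'PUNCT':
            continue
        if restrict_to is not None and i not in restrict_to:
            continue
        total += 1
        pos += g['pos'] == p['pos']
        head_ok = g['hind'] == p['hind']
        label_ok = g['rel'] == p['rel']
        uas += head_ok
        la += label_ok
        las += head_ok and label_ok
    if total == 0:
        return {}
    return {'POS': 100.0 * pos / total, 'UAS': 100.0 * uas / total,
            'LA': 100.0 * la / total, 'LAS': 100.0 * las / total}
```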
Note that differently from Tokens Train Set UPOS POS UAS LA LAS Ungrammatical EWT 87.97 88.61 82.66 82.66 74.93 Grammatical EWT 92.62 95.37 87.26 89.11 82.7 Ungrammatical TLEorig 90.76 88.68 83.81 83.31 77.22 Grammatical TLEorig 96.86 96.14 88.46 90.41 84.59 Ungrammatical EWT+TLEorig 89.76 90.97 86.32 85.96 80.37 Grammatical EWT+TLEorig 94.02 96.7 91.07 92.08 87.41 Table 4: Tagging and parsing results on the original version of the TLE test set for tokens marked with grammatical errors (Ungrammatical) and tokens not marked for errors (Grammatical). the global measurements in the first experiment, this analysis, which focuses on the local impact of remove/replace errors, suggests a stronger effect of grammatical errors on the dependency labels than on the dependency structure. Finally, we measure tagging and parsing performance relative to the fraction of sentence tokens marked with grammatical errors. Similarly to the previous experiment, this analysis focuses on remove/replace rather than insert errors. 0-5 (362) 5-10 (1033) 10-15 (1050) 15-20 (955) 20-25 (613) 25-30 (372) 30-35 (214) 35-40 (175) % of Original Sentence Tokens Marked as Grammatical Errors 76 78 80 82 84 86 88 90 92 94 96 98 100 Mean Per Sentence Score POS original POS corrected UAS original UAS corrected LAS original LAS corrected Figure 1: Mean per sentence POS accuracy, UAS and LAS of the Turbo tagger and Turbo parser, as a function of the percentage of original sentence tokens marked with grammatical errors. The tagger and the parser are trained on the EWT corpus, and tested on all 5,124 sentences of the TLE. Points connected by continuous lines denote performance on the original TLE sentences. Points connected by dashed lines denote performance on the corresponding error corrected sentences. The number of sentences whose errors fall within each percentage range appears in parenthesis. Figure 1 presents the average sentential performance as a function of the percentage of tokens in the original sentence marked with grammati743 cal errors. In this experiment, we train the parser on the EWT training set and test on the entire TLE corpus. Performance curves are presented for POS, UAS and LAS on the original and error corrected versions of the annotations. We observe that while the performance on the corrected sentences is close to constant, original sentence performance is decreasing as the percentage of the erroneous tokens in the sentence grows. Overall, our results suggest a negative, albeit limited effect of grammatical errors on parsing. This outcome contrasts a study by Geertzen et al. (2013) which reported a larger performance gap of 7.6 UAS and 8.8 LAS between sentences with and without grammatical errors. We believe that our analysis provides a more accurate estimate of this impact, as it controls for both sentence content and sentence length. The latter factor is crucial, since it correlates positively with the number of grammatical errors in the sentence, and negatively with parsing accuracy. 8 Related Work Previous studies on learner language proposed several annotation schemes for both POS tags and syntax (Hirschmann et al., 2007; Dıaz-Negrillo et al., 2010; Dickinson and Ragheb, 2013; Rosen et al., 2014). The unifying theme in these proposals is a multi-layered analysis aiming to decouple the observed language usage from conventional structures in the foreign language. In the context of ESL, Dıaz et al. 
(2010) propose three parallel POS tag annotations for the lexical, morphological and distributional forms of each word. In our work, we adopt the distinction between morphological word forms, which roughly correspond to our literal word readings, and distributional forms as the error corrected words. However, we account for morphological forms only when these constitute valid existing PTB POS tags and are contextually plausible. Furthermore, while the internal structure of invalid word forms is an interesting object of investigation, we believe that it is more suitable for annotation as word features rather than POS tags. Our treebank supports the addition of such features to the existing annotations. The work of Ragheb and Dickinson (2009; 2012; 2013) proposes ESL annotation guidelines for POS tags and syntactic dependencies based on the CHILDES annotation framework. This approach, called “morphosyntactic dependencies” is related to our annotation scheme in its focus on surface structures. Differently from this proposal, our annotations are grounded in a parallel annotation of grammatical errors and include an additional layer of analysis for the corrected forms. Moreover, we refrain from introducing new syntactic categories and dependency relations specific to ESL, thereby supporting computational treatment of ESL using existing resources for standard English. At the same time, we utilize a multilingual formalism which, in conjunction with our literal annotation strategy, facilitates linking the annotations to native language syntax. While the above mentioned studies focus on annotation guidelines, attention has also been drawn to the topic of parsing in the learner language domain. However, due to the shortage of syntactic resources for ESL, much of the work in this area resorted to using surrogates for learner data. For example, in Foster (2007) and Foster et al. (2008) parsing experiments are carried out on synthetic learner-like data, that was created by automatic insertion of grammatical errors to well formed English text. In Cahill et al. (2014) a treebank of secondary level native students texts was used to approximate learner text in order to evaluate a parser that utilizes unlabeled learner data. Syntactic annotations for ESL were previously developed by Nagata et al. (2011), who annotate an English learner corpus with POS tags and shallow syntactic parses. Our work departs from shallow syntax to full syntactic analysis, and provides annotations on a significantly larger scale. Furthermore, differently from this annotation effort, our treebank covers a wide range of learner native languages. An additional syntactic dataset for ESL, currently not available publicly, are 1,000 sentences from the EFCamDat dataset (Geertzen et al., 2013), annotated with Stanford dependencies (De Marneffe and Manning, 2008). This dataset was used to measure the impact of grammatical errors on parsing by comparing performance on sentences with grammatical errors to error free sentences. The TLE enables a more direct way of estimating the magnitude of this performance gap by comparing performance on the same sentences in their original and error corrected versions. Our comparison suggests that the effect of grammatical errors on parsing is smaller that the one reported in this study. 744 9 Conclusion We present the first large scale treebank of learner language, manually annotated and doublereviewed for POS tags and universal dependencies. 
The annotation is accompanied by a linguistically motivated framework for handling syntactic structures associated with grammatical errors. Finally, we benchmark automatic tagging and parsing on our corpus, and measure the effect of grammatical errors on tagging and parsing quality. The treebank will support empirical study of learner syntax in NLP, corpus linguistics and second language acquisition. 10 Acknowledgements We thank Anna Korhonen for helpful discussions and insightful comments on this paper. We also thank Dora Alexopoulou, Andrei Barbu, Markus Dickinson, Sue Felshin, Jeroen Geertzen, Yan Huang, Detmar Meurers, Sampo Pyysalo, Roi Reichart and the anonymous reviewers for valuable feedback on this work. This material is based upon work supported by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216. References Aoife Cahill, Binod Gyawali, and James V Bruno. 2014. Self-training for parsing learner text. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 66–73. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37. David Crystal. 2003. English as a global language. Ernst Klett Sprachen. Marie-Catherine De Marneffe and Christopher D Manning. 2008. Stanford typed dependencies manual. Technical report, Technical report, Stanford University. Marie-Catherine De Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D Manning. 2014. Universal stanford dependencies: A cross-linguistic typology. In Proceedings of LREC, pages 4585–4592. Ana Dıaz-Negrillo, Detmar Meurers, Salvador Valera, and Holger Wunsch. 2010. Towards interlanguage pos annotation for effective learner corpora in sla and flt. Language Forum, 36(1–2):139–154. Markus Dickinson and Marwa Ragheb. 2009. Dependency annotation for learner corpora. In Proceedings of the Eighth Workshop on Treebanks and Linguistic Theories (TLT-8), pages 59–70. Markus Dickinson and Marwa Ragheb. 2013. Annotation for learner English guidelines, v. 0.1. Technical report, Indiana University, Bloomington, IN, June. June 9, 2013. Jennifer Foster, Joachim Wagner, and Josef Van Genabith. 2008. Adapting a wsj-trained parser to grammatically noisy text. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 221–224. Association for Computational Linguistics. Jennifer Foster. 2007. Treebanks gone bad. International Journal of Document Analysis and Recognition (IJDAR), 10(3-4):129–145. Jeroen Geertzen, Theodora Alexopoulou, and Anna Korhonen. 2013. Automatic linguistic annotation of large scale l2 databases: The ef-cambridge open language database (efcamdat). In Proceedings of the 31st Second Language Research Forum. Somerville, MA: Cascadilla Proceedings Project. Hagen Hirschmann, Seanna Doolittle, and Anke L¨udeling. 2007. Syntactic annotation of noncanonical linguistic structures. Andr´e FT Martins, Miguel Almeida, and Noah A Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In ACL (2), pages 617–622. Citeseer. Ryan T McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith B Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, et al. 2013. Universal dependency annotation for multilingual parsing. In ACL (2), pages 92–97. Citeseer. 
Ryo Nagata, Edward Whittaker, and Vera Sheinman. 2011. Creating a manually error-tagged and shallow-parsed learner corpus. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1210–1219. Association for Computational Linguistics. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In CoNLL Shared Task, pages 1–14. Diane Nicholls. 2003. The cambridge learner corpus: Error coding and analysis for lexicography and elt. In Proceedings of the Corpus Linguistics 2003 conference, pages 572–581. 745 Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajiˇc, Christopher Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). Marwa Ragheb and Markus Dickinson. 2012. Defining syntax for learner language annotation. In COLING (Posters), pages 965–974. Victoria Ros´en and Koenraad De Smedt. 2010. Syntactic annotation of learner corpora. Systematisk, variert, men ikke tilfeldig, pages 120–132. Alexandr Rosen, Jirka Hana, Barbora ˇStindlov´a, and Anna Feldman. 2014. Evaluating and automating the annotation of a learner corpus. Language Resources and Evaluation, 48(1):65–92. Beatrice Santorini. 1990. Part-of-speech tagging guidelines for the penn treebank project (3rd revision). Technical Reports (CIS). Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel R Bowman, Miriam Connor, John Bauer, and Christopher D Manning. 2014. A gold standard dependency corpus for english. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC2014). Joel Tetreault, Jennifer Foster, and Martin Chodorow. 2010. Using parse features for preposition selection and error detection. In Proceedings of the acl 2010 conference short papers, pages 353–358. Association for Computational Linguistics. Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In ACL, pages 180–189. 746
2016
70
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 747–755, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Extracting token-level signals of syntactic processing from fMRI - with an application to PoS induction Joachim Bingel Maria Barrett Anders Søgaard Centre for Language Technology, University of Copenhagen Njalsgade 140, 2300 Copenhagen S, Denmark {bingel, barrett, soegaard}@hum.ku.dk Abstract Neuro-imaging studies on reading different parts of speech (PoS) report somewhat mixed results, yet some of them indicate different activations with different PoS. This paper addresses the difficulty of using fMRI to discriminate between linguistic tokens in reading of running text because of low temporal resolution. We show that once we solve this problem, fMRI data contains a signal of PoS distinctions to the extent that it improves PoS induction with error reductions of more than 4%. 1 Introduction A few recent studies have tried to extract morphosyntactic signals from measurements of human sentence processing and used this information to improve NLP models. Klerke et al. (2016), for example, used eye-tracking recordings to regularize a sentence compression model. More related to this work, Barrett et al. (2016) recently used eyetracking recordings to induce PoS models. However, a weakness of eye-tracking data is that while eye movement surely does reflect the temporal aspect of cognitive processing, it is only a proxy of the latter and does not directly represent which processes take place in the brain. A recent neuro-imaging study suggests that concrete nouns and verbs elicit different brain signatures in the frontocentral cortex, and that concrete and abstract nouns elicit different brain activation patterns (Moseley and Pulverm¨uller, 2014). Also, for example, concrete verbs activate motor and premotor cortex more strongly than concrete nouns, and concrete nouns activate inferior frontal areas more strongly than concrete verbs. A decade earlier, Tyler et al. (2004) showed that the left inferior frontal gyrus was more strongly activated in processing regularly inflected verbs compared to regularly inflected nouns. Such studies suggest that different parts of our brains are activated when reading different parts of speech (PoS). This would in turn mean that neuro-images of readers carry information about the grammatical structure of what they read. In other words, neuro-imaging provides a partial, noisy annotation of the data with respect to morphosyntactic category. Say neuro-imaging data of readers was readily available. Would it be of any use to, for example, engineers interested in PoS taggers for lowresource languages? This is far from obvious. In fact, it is well-known that neuro-imaging data from reading is noisy, in part because the reading signal is not always very distinguishable (Tagamets et al., 2000), and also because the content of what we read may elicit certain activation in brain regions e.g. related to sensory processing (Boulenger et al., 2006; Gonz´alez et al., 2006). Other researchers such as Borowsky et al. (2013) have also questioned that there are differences, claiming to show that the majority of activation is shared between nouns and verbs – including in regions suggested by previous researchers as unique to either nouns or verbs. Berlingeri et al. (2008) argue that only verbs could be associated with unique regions, not nouns. In this paper we nevertheless explore this question. 
The paper should be seen as a proof of concept that interesting linguistic signals can be extracted from brain imaging data, and an attempt to show that learning NLP models from such data could be a way of pushing the boundaries of both fields. Contributions (a) We present a novel technique for extracting syntactic processing signal at the token level from neuro-imaging data that is charac747 Figure 1: Neural activity by brain region and type of information processed, as measured and rendered by Wehbe et al. (2014). terized by low temporal resolution. (b) We demonstrate that the fMRI data improves performance of a type-constrained, second order hidden Markov model for PoS induction. Our model leads to an error reduction of more than 4% in tagging accuracy despite very little training data, which to the best of our knowledge is the first positive result on weakly supervised part-of-speech induction from fMRI data in the literature. 2 fMRI Functional Magnetic Resonance Imaging (fMRI) is a technology for spatial visualization of brain activity. It measures the changes in oxygenation of the blood in the brain, often by use of the blood oxygenation level-dependent contrast (Ogawa et al., 1992), which correlates with neural activity. While the spatial resolution of fMRI is very high, its temporal resolution is low compared to other brain imaging technologies like EEG, which usually returns millisecond records of brain activity, but on the contrary have low spatial resolution. The temporal resolution of fMRI is usually between 0.5Hz and 1Hz. fMRI data contains representations of neural activity of millimeter-sized cubes called voxels. The high spatial resolution may enable us to detect fine differences in brain activation patterns, such as between processing nouns and verbs, but the low temporal resolution is a real challenge when the different tokens are processed serially and quickly after each other, as is the case in reading. Another inherent challenge when working with fMRI data is the lag between the the reaction to a stimulus and the point when it becomes visible through fMRI. This lag is called the hemodynamic response latency. While we know from brain imaging technologies with higher temporal resolution that the neural response to a stimuli happens within milliseconds, it only shows in fMRI data after a certain period of time, which further blurs the low temporal dimension of serial fMRI recordings. This latency has been studied as long as fMRI technology itself. It depends on the blood vessels and varies between e.g. voxels, brain regions, subjects, and tasks. A meta study of the hemodynamic response report latencies between 4 and 14 seconds in healthy adults, though latencies above 11 seconds are less typically reported (Handwerker et al., 2012). According to Handwerker et al. (2012), the precise response shape for a given stimulus and voxel region is hard to predict and remains a challenge when modeling temporal aspects of fMRI data. Figure 1 visualizes the neural activations in different brain regions as a reaction to the type of information that is processed during reading. See Price (2012) for a thorough review of fMRI language studies. Wehbe et al. (2014) presented a novel approach to fMRI studies of linguistic processing by studying a more naturalistic reading scenario, and modeling the entire process of reading and story understanding. They used data from 8 subjects reading contextualized, running text: a chapter from a Harry Potter book. 
The central benefit of this approach is that it allows studies of complex text processing closer to a real-life reading experience. Wehbe et al. (2014) used this data to train a comprehensive, generative model that—given a text passage—could predict the fMRI-recorded activity during the reading of this passage. Using the same data, our goal is to model a specific aspect of the story understanding process, i.e. the grammatical processing of words. 3 Data 3.1 Textual data We use the available fMRI recordings from Wehbe et al. (2014), where 8 adult, native English speakers read chapter 9 from Harry Potter and the Sorcerer’s Stone in English. The textual data as provided in the data set does not explicitly mark sentence boundaries, neither is punctuation sep748 Figure 2: Computation of token-level fMRI vectors from the original fMRI data for the first token “Harry” while accounting for hemodynamic response latency using a Gaussian sliding window over a certain time window (indicated by red horizontal line). The final fMRI vector for “Harry” (red box) is computed as specified in Equation 1. In this example, the time stamp t for the token is 20s and the time window stretches from t + 1s to t + 2.5s. arated from the tokens at the end of clauses and sentences. As the temporal alignment between tokens and fMRI recordings (see below) forbids us to detach punctuation marks from their preceding tokens and introduce them as new tokens, we opt to remove all punctuation from the data. In the same process, we use simple heuristics to detect sentence boundaries. Finally, we correct errors in sentence splitting manually. The chapter counts 4,898 tokens (excluding punctuation) and 1,411 types in 408 sentences. 3.2 fMRI data The fMRI data from the same data set is available as high-dimensional vectors of flattened thirdorder tensors, in which each component represents the blood-oxygen-level dependent contrast for a certain voxel in the three-dimensional fMRI image. The resolution of the image is at 3×3×3 mm, such that the brain activity for the eight subjects is represented by approximately 31,400 voxels on average (standard deviation is 3,607) depending on the size of their brain. This data is recorded every two seconds during the reading process, in which each token is consecutively displayed for 0.5 seconds on a screen inside the fMRI scanner. Prior to reading, the subjects are asked to focus on a cross displayed at the center of the screen in a warm-up phase of 20 seconds. The chapter is divided into four blocks, separated by additional concentration phases of 20 seconds. Furthermore, paragraphs are separated by a 0.5-seconds display of a cross at the center of the screen. As mentioned in the preceding section, punctuation marks were not displayed separately, but instead attached to the preceding token. This is arguably motivated through the attempt to create a reading scenario that is as natural as possible within the limitations of an fMRI recording. In similar fashion, contractions such as don’t or he’s were represented as one token, just as they appear in the original text. In order to make the data feasible for our HMM approach (see Section 4), we apply Principal Component Analysis (PCA) to the high-dimensional fMRI vectors. We initially tune the number of principal components, which we describe in Section 5. 3.2.1 Computing token-level fMRI vectors As outlined above, the time resolution of the fMRI recordings means that every block of four consecutive tokens is time-aligned with a single fMRI image. 
Naturally, this shared representation of consecutive tokens complicates any language learning at the token level. Furthermore, the hemodynamic response latency inherent to fMRI recordings entails that the image recorded while reading a certain token most probably does not give any clues about the mental state elicited by this stimulus. We therefore face the dual challenge of 1. inferring token-level information from supratoken recordings, and 2. identifying the lag after which the perceptual effects of reading a given token are visible. 749 zi-2 zi-1 zi xi-2 xi-1 xi Figure 3: Second-order HMM incorporating transitional probabilities from first and second-degree preceding states. We address this problem through the following procedure that we illustrate in Figure 2. First, we copy the number of fMRI recordings fourfold, such that every fMRI vector is aligned to exactly one token (excluding the vectors that are recorded while no token was displayed). The representation for a given token is then computed as a weighted average over all fMRI vectors that lie within a certain time window in relation to the token in question. Two consecutive tokens that originally lie within the same block of four thus receive different representations, provided that the window is large enough to transcend the border between two blocks. The fMRI representation for the token at time stamp t is given by vt = 1 |V | |V | X k=1 Vk · wk (1) where V is the series of fMRI vectors within the time window [t + s, t + e], and w is a Gaussian window of |V | points, with a standard deviation of 1. In factoring the Gaussian weight vector into the equation, we lend less weight to the fMRI recordings at the outset and at the end of the time window specified through s (start) and e (end). 4 Model We use a second-order hidden Markov model (HMM) with Wiktionary-derived type constraints (Li et al., 2012) as our baseline for weakly supervised PoS induction. We use the original implementation by Li et al. (2012). The model is a type-constrained, second order version of the first-order featurized HMM previously introduced by Berg-Kirkpatrick et al. (2010). In each state zi, a PoS HMM generates a sequence of words by consecutively generating word emissions xi and successor states zi+1. The emission probabilities and state transition probabilities are multinomial distributions over words and PoS. The joint probability of a word sequence and a tag sequence is Pθ(x, z) = Pθ(z1) Y i=1 Pθ(xi|zi) Y i=2 Pθ(zi|zi−1) (2) Following Berg-Kirkpatrick et al. (2010), the model calculates the probability distribution θ that parameterizes the emission probabilities as the output of a maximum entropy model, which enables unsupervised learning with a rich set of features. We thus let θxi,zi = exp(w⊺f(xi, zi)) P x′ exp(w⊺f(x′, zi)) (3) where w is a weight vector and f(xi, zi) is a feature function that will, in our case, consider the fMRI vectors vt that we computed in section 3.2.1 and a number of basic features that we adopt from the original model (Li et al., 2012). See Section 5 for details. In addition, we use a second-order HMM, first introduced for PoS tagging in Thede and Harper (1999), in which transitional probabilities are also considered for second-degree subsequent states (cf. figure 3). 
Here, the joint probability becomes Pθ(x, z) = Pθ(z1)Pθ(x1|z1)Pθ(z2|z1) Y i=2 Pθ(xi|zi) Y i=3 Pθ(zi|zi−2, zi−1) (4) In order to optimize the HMM (including the weight vector w), the model uses the EM algorithm as applied for feature-rich, locally normalized models introduced in Berg-Kirkpatrick et al. (2010), with the important modification that we use type constraints in the E-step, following Li et al. (2012). Specifically, for each state zi, the emission probability P(xi|zi) is initialized randomly for every word type associated with zi in our tag dictionary (the type constraints). This weakly supervised setup allows us to predict the actual PoS tags instead of abstract states. The Mstep is solved using L-BFGS (Liu and Nocedal, 1989) 750 1 2 3 4 5 6 7 8 Subject ID 70 71 72 73 74 75 76 77 78 79 Dev set accuracy Figure 4: Accuracy on the development set for the different subjects when trained and tested on fMRI data from only this one subject. Dashed line is the development set baseline. Only in one out of eight cases does adding fMRI features lead to worse performance. EM-HMM Parameters We use the same setting as Li et al. (2012) for the number of EM iterations, fixing this parameter to 30 for all experiments. 5 Experiments Experimental setup From the neuro-imaging dataset described above, we use 41 sentences (720 tokens) as a development set and 41 sentences (529 tokens) as a test set, and the remaining 326 sentences (corresponding to 80%) for training our model. Basic features The basic features of all the models (except when explicitly stated otherwise) are based on seven features that we adopt from Li et al. (2012), capturing word form, hyphenation, suffix patterns, capitalization and digits in the token. Wiktionary Of the 1,411 word types in the corpus, we find that 1,381 (97.84%) are covered by the Wiktionary dump made available by Li et al. (2012),1 which we use as our type constraints when inducing our models. 5.1 Part-of-speech annotation Though Wehbe et al. (2014) also provide syntactic information, these are automatic parses that are not suitable for the evaluation of our model. The development and test data are therefore manually 1https://code.google.com/archive/p/ wikily-supervised-pos-tagger/ annotated for universal part-of-speech tags (Petrov et al., 2011) by two linguistically trained annotators. The development set was annotated by both annotators, who reached an inter-annotator agreement of 0.926 in accuracy and 0.928 in weighted F1. For the final development and test data, disagreements were resolved by the annotators. 5.2 Non-fMRI baselines Our first baseline is a second-order HMM with type constraints from Wiktionary; this in all respects the model proposed by Liu et al. (2012), except trained on our small Harry Potter corpus. In a second baseline model, we also incorporate 300dimensional GloVe word embeddings trained on Wikipedia and the Gigaword corpus (Pennington et al., 2014). We also test a version of the baseline without the basic features to get an estimate of the contribution of this aspect of the setup. 5.3 Token-level fMRI We run a series of experiments with token-level fMRI vectors that we obtain as described in Section 3.2.1. Initially, we train separate models for each of the eight individual subjects, whose performance on the development data are illustrated in Figure 4. 
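To make the construction of these per-subject token-level features concrete, the following sketch shows one possible implementation of Equation 1 followed by the per-subject PCA step. It is a minimal illustrative reconstruction rather than the exact code used in our experiments: it assumes that the fourfold copy of Section 3.2.1 has already been performed, so that per_subject_scans holds, for every subject, a (num_tokens x num_voxels) array whose i-th row is the scan on display while token i was read, and that token_times holds the token onsets in seconds; these names are placeholders.

import numpy as np
from sklearn.decomposition import PCA

def gaussian_window(n, std=1.0):
    # symmetric Gaussian weights over n points (standard deviation 1, as in the text)
    x = np.arange(n) - (n - 1) / 2.0
    return np.exp(-0.5 * (x / std) ** 2)

def token_fmri_vectors(aligned_scans, token_times, start=-4.0, end=10.0):
    # Equation 1: for the token at time t, average all token-aligned scans whose
    # display times fall inside [t + start, t + end], weighted by a Gaussian window
    token_times = np.asarray(token_times)
    vecs = []
    for t in token_times:
        idx = np.where((token_times >= t + start) & (token_times <= t + end))[0]
        V = aligned_scans[idx]              # |V| x num_voxels
        w = gaussian_window(len(idx))       # |V| Gaussian weights
        vecs.append((V * w[:, None]).sum(axis=0) / len(idx))
    return np.vstack(vecs)                  # num_tokens x num_voxels

def concatenated_features(per_subject_scans, token_times, n_components=10):
    # reduce every subject to a few principal components, then concatenate across subjects
    feats = [PCA(n_components).fit_transform(token_fmri_vectors(s, token_times))
             for s in per_subject_scans]
    return np.hstack(feats)                 # num_tokens x (n_components * n_subjects)

The default window of [t - 4s, t + 10s] and the ten components per subject correspond to the hyperparameter values selected below; the resulting matrix is what the feature function f(xi, zi) consults for the fMRI features.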
5.3.1 Tuning hyperparameters We tune the following hyperparameters on the token-level development set in the following order: the number of subjects to use, the number of principal components per subject, and the time window. For the earlier tuning processes we fix the later hyperparameters to values we consider reasonable, but once we have tuned a hyperparameter, we use the best value from this tuning process for later tuning steps. The initial values are: 10 principal components and a time window of [t + 0s, t + 6s]. Number of subjects To reduce the chance of overfitting, we use fMRI data from several readers in our model. The data from Wehbe et al. (2014) would in theory allow us to average the three-dimensional image space for any number of readers, but this is not feasible if only for the difference in brain sizes between the subjects. It is not feasible, either, to average over the eigenvectors that we obtain from PCA, as the eigenvectors between subjects do not share the same (or any concrete) feature space. We therefore concatenate 751 1 2 3 4 5 6 7 8 Number of subjects 75.0 75.5 76.0 76.5 77.0 77.5 Dev set accuracy (a) Learning curve for increasing number of subjects in the model. Fixed hyper-parameters: 10 principal components and a time window of [t + 0s, t + 6s]. 0 10 20 30 40 50 Number of principal components 74.0 74.5 75.0 75.5 76.0 76.5 77.0 77.5 Dev set accuracy (b) Learning curve for increasing number of principal components per subject in the model. Number of principal components ∈ {1, 2, 3, 4, 5, 10, 15, 20, 50} Fixed hyperparameters: 8 subjects, a time window of [t + 0s, t + 6s]. Figure 5: Exploring two individual hyper-parameters of the model on development set. Dashed lines indicate the development set baseline. the eigenvectors that we obtain for different subjects, such that the feature vectors grow in length as the number of included subjects increases. As Figure 5a shows, exploring an increasing number of subjects in the model does not seem to a have consistent effect on development set accuracy. However, we expect an increased robustness from a model that incorporates a greater number of subjects. In all following experiments we therefore use data from all eight readers, but we would also expect a model with fewer subjects to perform reasonably. Principal components Fixing the number of subjects to eight, we then perform experiments to determine the number of principal components per subject to consider in our model, whose results are visualized in Figure 5b. We observe the first eigenvectors carry a strong signal, while a great number of principal components tends to water down the signal and lead to worse performance. We choose to continue using 10 dimensions in all further experiments. Time window for token vectors We next run experiments to determine the optimal time window for the computation of the token vectors, using different combinations of start and end times in relation to the token time stamps, but keeping the number of subjects and principal components constant at eight and ten, respectively. These experiments yield three different time windows with an equally good performance on the development set: [t −4s, t + 10s], [t + 2s, t + 8s] and [t + 0s, t + 6s]. Note that due to the Gaussian weighting the centre of the interval gets more weight than the edges and that [t−4s, t+10s] and [t+0s, t+6s] have the same centre, t + 3. 
While [t + 2s, t + 8s] and [t+0, t+6] align better with psycholinguistic expectations, [t −4s, t + 10s] makes our model less prone to overfitting. We therefore select the model averaging over the largest time window. 5.4 Type-level fMRI aggregates Next, we aggregate token vectors to compute their type-level averages, in an effort to explore to which degree neural activity is dependent on the read word type rather than the concrete grammatical environment, and whether this can allow our model to draw conclusions about the grammatical class of a token. We compute the type-level aggregates as the component-wise arithmetic mean of the token vectors that we extract using the parameter settings optimized above. Note, however, that out of the 4,898 tokens in the text, 823 (16.9%) occur only once. 6 Results Table 1 reports the results that we obtain with our final hyper-parameter settings, which are as follows: Number of subjects 8 Principal components 10 Start of time window t −4s End of time window t + 10s The results show that our model leads to a consid752 Accuracy Baseline (Li et al., 2012) 69.57 Baseline+GloVe 69.38 Baseline w/o basic feats 55.53 fMRI (token-level) w/o basic feats 56.99 fMRI (type-level) 70.32 fMRI (token-level) 70.89 Error reduction over baseline 04.34 Table 1: Tagging accuracy on test data for the different models. The fMRI model is significantly better than the baseline (p = 0.014, Bootstrap). Class Prec. Rec. F1 ± BL ADJ 37.50 42.86 40.00 +2.71 ADP 83.67 77.36 80.39 +1.54 ADV 66.00 58.93 62.26 +5.69 CONJ 70.97 70.97 70.97 ±0.00 DET 80.49 80.49 80.49 +3.38 NOUN 70.37 76.00 73.08 +0.28 NUM 00.00 00.00 00.00 -20.00 PRON 88.68 74.60 81.03 +4.76 PRT 41.67 41.67 41.67 +11.67 VERB 74.36 76.32 75.32 -0.95 Table 2: Test data tagging performance by part-ofspeech class for the best fMRI model. The rightmost column displays the difference in F1 compared to the baseline model. erable error reduction over the baseline model as well as the embeddings-enriched baseline model. It also outperforms the model which uses typelevel averages over the fMRI recordings. Leaving out the basic features hurts performance, but even without the basic features the fMRI data can reduce error with 3.28% on the test set. In Table 2 we present the performance on the individual PoS classes under our best model. 7 Analysis and Discussion 7.1 What’s in the fMRI vectors? t-SNE (Van der Maaten and Hinton, 2008) is a powerful supervised dimensionality reduction tool for visualizing high-dimensional data in twodimensional space using Stochastic Neighbor Embedding. In Figure 6, we visualize pairs of PoS classes of the test data in a two-dimensional re15 10 5 0 5 10 15 20 30 20 10 0 10 20 NOUN PRON (a) NOUN and PRON 20 10 0 10 20 15 10 5 0 5 10 15 VERB ADP (b) VERB and ADP 40 20 0 20 40 40 20 0 20 40 ADP ADJ (c) ADP and ADJ Figure 6: Selected t-SNE visualizations of fMRI vectors for all tokens of a class of the test set. The visualizations show that datapoints of a PoS class tend to cluster in the fMRI vector space. 753 duction of the embedding space obtained when using the settings of the best fMRI model. The fact that we can discriminate reasonably well between, e.g., nouns and pronouns, verbs and adpositions, as well as adpositions and adjectives on the basis of fMRI data is to the best of our knowledge a new finding. 7.2 Discussion of the results We showed that by careful model tuning and design it is possible to extract a signal of grammatical processing in the brain from fMRI. 
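The qualitative separation in Figure 6 can be reproduced with standard tooling. The sketch below is an illustrative example rather than the exact script used for the figure: it projects the token-level fMRI feature vectors of two PoS classes into two dimensions with scikit-learn's t-SNE implementation, and the array names are placeholders for the features and gold tags of the test tokens.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_pos_pair(features, pos_tags, pair=("NOUN", "PRON")):
    # 2-D t-SNE projection of token-level fMRI features for two PoS classes (cf. Figure 6)
    pos_tags = np.asarray(pos_tags)
    mask = np.isin(pos_tags, pair)
    proj = TSNE(n_components=2, random_state=0).fit_transform(features[mask])
    for tag, marker in zip(pair, ("o", "x")):
        sel = pos_tags[mask] == tag
        plt.scatter(proj[sel, 0], proj[sel, 1], marker=marker, label=tag)
    plt.legend()
    plt.show()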
The figures that we present in Table 1 reflect, to our knowledge, the first successful results in inferring grammatical function at the token level from fMRI data. Our best model, which we train on the ten principal components from the fMRI recordings of eight readers, achieves an error reduction of over 4% despite a very small amount of training data. We find that our best model uses a very wide window of fMRI recordings to compute the representations for individual tokens, considering all recordings from 4 seconds before the token is displayed until 10 seconds after the token is displayed. Our best explanation for why the incorporation of preceding fMRI measurements is beneficial to our model, is that the grammatical function of a token may be predictable from a reader’s cognitive state while reading preceding tokens. However, note that the measurements at the far ends of the time window only factor into the token vector to a small degree as a consequence of the Gaussian weighting. Our experiments further suggest that using token-level information instead of type-level features, such as word embeddings or type averages of fMRI vectors, is helpful for PoS induction that already is type-constrained. Recently, Huth et al. (2016) found that semantically related words are processed in the same area of the brain. Open questions for future work include whether there is a bigger potential for using fMRI data for semantic rather than syntactic NLP tasks, and whether the signal we find mainly stems from semantic processing differences. 8 Conclusion This paper presents the first experiments inducing part of speech from fMRI reading data. Cognitive psychologists have debated whether grammatical differences lead to different brain activation patterns. Somewhat surprisingly, we find that 1 5 10 15 20 25 30 Number of iterations 68 69 70 71 72 73 74 75 76 77 Dev. tagging accuracy full baseline Figure 7: Learning curve of tagging accuracy on the development set as a function of different number of EM iterations for baseline model and the full model for iteration numbers ∈[1, 30]. Fixed hyper-parameters: 8 subjects, 10 principal components, and a time window of t −4s to t + 10s the fMRI data contains a strong signal, enabling a 4% error reduction over a state-of-the-art unsupervised PoS tagger. While our approach may not be readily applicable for developing NLP models today, we believe that the presented results may inspire NLP researchers to consider learning models from combinations of linguistic resources and auxiliary, behavioral data that reflects human cognition. Acknowledgements This research was partially funded by the ERC Starting Grant LOWLANDS No. 313695, as well as by Trygfonden. Supplementary material Number of EM iterations As supplementary material, we present the EM learning curve in Figure 7, which shows a steep learning curve at the beginning and relatively stable performance figures after 15 iterations for the full model and 10 iterations for the baseline model. References Maria Barrett, Joachim Bingel, Frank Keller, and Anders Søgaard. 2016. Weakly supervised part-ofspeech induction using eye-tracking data. In ACL. Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cote, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proceedings of NAACL, pages 582–590. 754 Manuela Berlingeri, Davide Crepaldi, Rossella Roberti, Giuseppe Scialfa, Claudio Luzzatti, and Eraldo Paulesu. 2008. 
Nouns and verbs in the brain: Grammatical class and task specific effects as revealed by fMRI. Cognitive Neuropsychology, 25(4):528–558. Ron Borowsky, Carrie Esopenko, Layla Gould, Naila Kuhlmann, Gordon Sarty, and Jacqueline Cummine. 2013. Localisation of function for noun and verb reading: converging evidence for shared processing from fmri activation and reaction time. Language and Cognitive Processes, 28(6):789–809. V´eronique Boulenger, Alice C Roy, Yves Paulignan, Viviane Deprez, Marc Jeannerod, and Tatjana A Nazir. 2006. Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. Journal of cognitive neuroscience, 18(10):1607–1615. Julio Gonz´alez, Alfonso Barros-Loscertales, Friedemann Pulverm¨uller, Vanessa Meseguer, Ana Sanju´an, Vicente Belloch, and C´esar ´Avila. 2006. Reading cinnamon activates olfactory brain regions. Neuroimage, 32(2):906–912. Daniel A Handwerker, Javier Gonzalez-Castillo, Mark D’Esposito, and Peter A Bandettini. 2012. The continuing challenge of understanding and modeling hemodynamic variation in fmri. Neuroimage, 62(2):1017–1023. Alexander G Huth, Wendy A de Heer, Thomas L Griffiths, Fr´ed´eric E Theunissen, and Jack L Gallant. 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600):453– 458. Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. In NAACL. Shen Li, Jo˜ao Grac¸a, and Ben Taskar. 2012. Wikily supervised part-of-speech tagging. In EMNLP, pages 1389–1398. Dong C Liu and Jorge Nocedal. 1989. On the limited memory bfgs method for large scale optimization. Mathematical programming, 45(1-3):503–528. Xiaohua Liu, Ming Zhou, Furu Wei, Zhongyang Fu, and Xiangyang Zhou. 2012. Joint inference of named entity recognition and normalization. In ACL, pages 526–535. Rachel L Moseley and Friedemann Pulverm¨uller. 2014. Nouns, verbs, objects, actions, and abstractions: local fmri activity indexes semantics, not lexical categories. Brain and language, 132:28–42. Seiji Ogawa, David W Tank, Ravi Menon, Jutta M Ellermann, Seong G Kim, Helmut Merkle, and Kamil Ugurbil. 1992. Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. Proceedings of the National Academy of Sciences, 89(13):5951–5955. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532– 1543. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2011. A universal part-of-speech tagset. CoRR abs/1104.2086. Cathy J Price. 2012. A review and synthesis of the first 20years of pet and fmri studies of heard speech, spoken language and reading. Neuroimage, 62(2):816– 847. M-A Tagamets, Jared M Novick, Maria L Chalmers, and Rhonda B Friedman. 2000. A parametric approach to orthographic processing in the brain: an fmri study. Cognitive Neuroscience, Journal of, 12(2):281–297. Scott Thede and Mary Harper. 1999. A second-order hidden markov model for part-of-speech tagging. In ACL, pages 175–182. Lorraine Tyler, Peter Bright, Paul Fletcher, and Emmanuel Stamatakis. 2004. Neural processing of nouns and verbs: The role of inflectional morphology. Neuropsychologia, 42(4):512–523. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(2579-2605):85. Leila Wehbe, Brian Murphy, Partha Talukdar, Alona Fyshe, Aaditya Ramdas, and Tom Mitchell. 2014. 
Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PloS one, 9(11):e112575.
2016
71
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 756–765, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Bidirectional Recurrent Convolutional Neural Network for Relation Classification Rui Cai, Xiaodong Zhang and Houfeng Wang∗ Key Laboratory of Computational Linguistics (Ministry of Education), School of EECS, Peking University, Beijing, 100871, China Collaborative Innovation Center for Lanuage Ability, Xuzhou, Jiangsu, 221009, China {cairui, zxdcs, wanghf}@pku.edu.cn Abstract Relation classification is an important semantic processing task in the field of natural language processing (NLP). In this paper, we present a novel model BRCNN to classify the relation of two entities in a sentence. Some state-of-the-art systems concentrate on modeling the shortest dependency path (SDP) between two entities leveraging convolutional or recurrent neural networks. We further explore how to make full use of the dependency relations information in the SDP, by combining convolutional neural networks and twochannel recurrent neural networks with long short term memory (LSTM) units. We propose a bidirectional architecture to learn relation representations with directional information along the SDP forwards and backwards at the same time, which benefits classifying the direction of relations. Experimental results show that our method outperforms the state-of-theart approaches on the SemEval-2010 Task 8 dataset. 1 Introduction Relation classification aims to classify the semantic relations between two entities in a sentence. For instance, in the sentence “The [burst]e1 has been caused by water hammer [pressure]e2”, entities burst and pressure are of relation CauseEffect(e2, e1). Relation classification plays a key role in robust knowledge extraction, and has become a hot research topic in recent years. Nowadays, deep learning techniques have made significant improvement in relation classification, ∗Corresponding author compared with traditional relation classification approaches focusing on designing effective features (Rink and Harabagiu, 2010) or kernels (Zelenko et al., 2003; Bunescu and Mooney, 2005) Although traditional approaches are able to exploit the symbolic structures in sentences, they still suffer from the difficulty to generalize over the unseen words. Some recent works learn features automatically based on neural networks (NN), employing continuous representations of words (word embeddings). The NN research for relation classification has centered around two main network architectures: convolutional neural networks and recursive/recurrent neural networks. Convolutional neural network aims to generalize the local and consecutive context of the relation mentions, while recurrent neural networks adaptively accumulate the context information in the whole sentence via memory units, thereby encoding the global and possibly unconsecutive patterns for relation classification. Socher et al. (2012) learned compositional vector representations of sentences with a recursive neural network. Kazuma et al. (2013) proposed a simple customizaition of recursive neural networks. Zeng et al. (2014) proposed a convolutional neural network with position embeddings. Recently, more attentions have been paid to modeling the shortest dependency path (SDP) of sentences. Liu et al. 
(2015) developed a dependency-based neural network, in which a convolutional neural network has been used to capture features on the shortest path and a recursive neural network is designed to model subtrees. Xu et al. (2015b) applied long short term memory (LSTM) based recurrent neural networks (RNNs) along the shortest dependency path. However, SDP is a special structure in which every two neighbor words are separated by a dependency relations. Previous works treated dependency relations in the same 756 Figure 1: The shortest dependency path representation for an example sentence from SemEval-08. way as words or some syntactic features like partof-speech (POS) tags, because of the limitations of convolutional neural networks and recurrent neural networks. Our first contribution is that we propose a recurrent convolutional neural network (RCNN) to encode the global pattern in SDP utilizing a two-channel LSTM based recurrent neural network and capture local features of every two neighbor words linked by a dependency relation utilizing a convolution layer. We further observe that the relationship between two entities are directed. For instance, Figure 1 shows that the shortest path of the sentence “The [burst]e1 has been caused by water hammer [pressure]e2.” corresponds to relation CauseEffect(e2, e1). The SDP of the sentence also corresponds to relation Cause-Effect(e2, e1), where e1 refers to the entity at front end of SDP and e2 refers to the entity at back end of SDP, and the inverse SDP corresponds to relation Cause-Effect(e1, e2). Previous work (Xu et al., 2015b) simply transforms a (K+1)-relation task into a (2K + 1) classification task, where 1 is the Other relation and K is the number of directed relations. Besides, the recurrent neural network is a biased model, where later inputs are more dominant than earlier inputs. It could reduce the effectiveness when it is used to capture the semantics of a whole shortest dependency path, because key components could appear anywhere in a SDP rather than the end. Our second contribution is that we propose a bidirectional recurrent convolutional neural networks (BRCNN) to learn representations with bidirectional information along the SDP forwards and backwards at the same time, which also strengthen the ability to classifying directions of relationships between entities. Experimental results show that the bidirectional mechanism significantly improves the performance. We evaluate our method on the SemEval-2010 relation classification task, and achieve a state-ofthe-art F1-score of 86.3%. 2 The Proposed Method In this section, we describe our method in detail. Subsection 2.1 provides an overall picture of our BCRNN model. Subsection 2.2 presents the rationale of using SDPs and some characteristics of SDP. Subsection 2.3 describes the two-channel recurrent neural network, and bidirectional recurrent convolutional neural network is introduced in Subsection 2.4. Finally, we present our training objective in Subsection 2.5. 2.1 Framework Our BCRNN model is used to learn representations with bidirectional information along the SDP forwards and backwards at the same time. Figure 2 depicts the overall architecture of the BRCNN model. Given a sentence and its dependency tree, we build our neural network on its SDP extracted from the tree. Along the SDP, two recurrent neural networks with long short term memory units are applied to learn hidden representations of words and dependency relations respectively. 
A convolution layer is applied to capture local features from hidden representations of every two neighbor words and the dependency relations between them. A max pooling layer thereafter gathers information from local features of the SDP or the inverse SDP. We have a softmax output layer after pooling layer for classification in the unidirectional model RCNN. On the basis of RCNN model, we build a bidirectional architecture BRCNN taking the SDP and the inverse SDP of a sentence as input. During the training stage of a (K+1)-relation task, 757 two fine-grained so ftmax classifiers of RCNNs do a (2K + 1)-class classification respectively. The pooling layers of two RCNNs are concatenated and a coarse-grained so ftmax output layer is followed to do a (K + 1)-class classification. The final (2K+1)-class distribution is the combination of two (2K+1)-class distributions provided by finegrained classifiers respectively during the testing stage. 2.2 The Shortest Dependency Path If e1 and e2 are two entities mentioned in the same sentence such that they are observed to be in a relationship R, the shortest path between e1 and e2 condenses most illuminating information for the relationship R(e1, e2). It is because (1) if entities e1 and e2 are arguments of the same predicate, the shortest path between them will pass through the predicate; (2) if e1 and e2 belong to different predicate-argument structures that share a common argument, the shortest path will pass through this argument. Bunescu and Mooney (2005) first used shortest dependency paths between two entities to capture the predicate-argument sequences, which provided strong evidence for relation classification. Xu et al. (2015b) captured information from the sub-paths separated by the common ancestor node of two entities in the shortest paths. However, the shortest dependency path between two entities is usually short (∼4 on average) , and the common ancestor of some SDPs is e1 or e2, which leads to imbalance of two sub-paths. We observe that, in the shortest dependency path, each two neighbor words wa and wb are linked by a dependency relation rab. The dependency relations between a governing word and its children make a difference in meaning. Besides, if we inverse the shortest dependency path, it corresponds to the same relationship with an opposite direction. For example , in Figure 1, the shortest path is composed of some sub-structure like “burst nsub jpass −−−−−−−−→caused”. Following the above intuition, we design a bidirectional recurrent convolutional neural network, which can capture features from the local substructures and inversely at the same time. 2.3 Two-Channel Recurrent Neural Network with Long Short Term Memory Units The recurrent neural network is suitable for modeling sequential data, as it keeps hidden state vector h, which changes with input data at each step accordingly. We make use of words and dependency relations along the SDP for relations classification (Figure 2). We call them channels as these information sources do not interact during recurrent propagation. Each word and dependency relation in a given sentence is mapped to a real-valued vector by looking up in a embedding table. The embeddings of words are trained on a large corpus unsupervisedly and are thought to be able to capture their syntactic and semantic information, and the embeddings of dependency relations are initialized randomly. The hidden state ht, for the t-th input is a function of its previous state ht−1 and the embedding xt of current input. 
Traditional recurrent networks have a basic interaction, that is, the input is linearly transformed by a weight matrix and nonlinearly squashed by an activation function. Formally, we have ht = f(Win · xt + Wrec · ht−1 + bh) (1) where Win and Wrec are weight matrices for the input and recurrent connections, respectively. bh is a bias term for the hidden state vector, and f a non-linear activation function. It was difficult to train RNNs to capture longterm dependencies because the gradients tend to either vanish or explode. Therefore, some more sophisticated activation function with gating units were designed. Long short term memory units are proposed in Hochreiter and Schmidhuber (1997) to overcome this problem. The main idea is to introduce an adaptive gating mechanism, which decides the degree to which LSTM units keep the previous state and memorize the extracted features of the current data input. Many LSTM variants have been proposed. We adopt in our method a variant introduced by Zaremba and Sutskever (2014). Concretely, the LSTM-based recurrent neural network comprises four components: an input gate it, a forget gate ft, an output gate ot, and a memory cell ct. First, we compute the values for it, the input gate, and gt the candidate value for the states of 758 burst nsubjpass caused prep by pobj pressure LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM Input Lookup Table Pool Two-channel LSTM Two-channel LSTM Convolution Convolution fine-grained softmax fine-grained softmax coarse-graind softmax forwards backwards Figure 2: The overall architecture of BRCNN. Two-Channel recurrent neural networks with LSTM units pick up information along the shortest dependency path, and inversely at the same time. Convolution layers are applied to extract local features from the dependency units. the memory cells at time t: it = σ(Wi · xt + Ui · ht−1 + bi) (2) gt = tanh(Wc · xt + Uc · ht−1 + bc) (3) Second, we compute the value for ft, the activations of the memory cells’ forget gates at time t: ft = σ(W f · xt + U f · ht−1 + bf ) (4) Given the value of the input gate activations it, the forget gate activation ft and the candidate state value gt, we can compute ct the memory cells’ new state at time t: ct = it ⊗gt + ft ⊗ct−1 (5) With the new state of the memory cells, we can compute the value of their output gates and, subsequently, their outputs: ot = σ(Wo · xt + Uo · ht−1 + bo) (6) ht = ot ⊗tanh(ct) (7) In the above equations, σ denotes a sigmoid function; ⊗denotes element-wise multiplication. 2.4 Bidirectional Recurrent Convolutional Neural Network We observe that a governing word wa and its children wb are linked by a dependency relation rab, which makes a difference in meaning. For example, “kills nsub j −−−−→it” is distinct from “kills dobj −−−→ it”. The shortest dependency path is composed of many substructures like “wa rab −−→wb”, which are hereinafter referred to as “dependency unit”. Hidden states of words and dependency relations in the SDP are obtained, utilizing two-channel recurrent neural network. The hidden states of wa, wb and rab are ha, hb and h′ ab, and the hidden state of the dependency unit dab is [ha ⊕h′ ab ⊕hb], where ⊕denotes concatenate operation. Local features Lab for the dependency unit dab can be extracted, utilizing a convolution layer upon the two-channel recurrent neural network . 
Formally, we have Lab = f(Wcon · [ha ⊕h′ ab ⊕hb] + bcon) (8) where Wcon is the weight matrix for the convolution layer and bcon is a bias term for the hidden state vector. f is a non-linear activation function(tanh is used in our model). A pooling layer thereafter gather global information G from 759 local features of dependency units, which is defined as G = D max d=1 Ld (9) where the max function is an element-wise function, and D is the number of dependency units in the SDP. The advantage of two-channel recurrent neural network is the ability to better capture the contextual information, adaptively accumulating the context information the whole path via memory units. However, the recurrent neural network is a biased model, where later inputs are more dominant than earlier inputs. It could reduce the effectiveness when it is used to capture features for relation classification, for the entities are located at both ends of SDP and key components could appear anywhere in a SDP rather than at the end. We tackle the problem with Bidirectional Convolutional Recurrent Neural Network. On the basis of observation, we make a hypothesis that SDP is a symmetrical structure. For example, if there is a forward shortest path −→S which corresponds to relation Rx(e1, e2), the backward shortest path ←−S can be obtained by inversing −→S , and ←−S corresponds to Rx(e2, e1), and both −→S and ←−S correspond to relation Rx. As shown in Figure 2, two RCNNs pick up information along −→S and ←−S , obtaining global representations −→G and ←−G. A representation with bidirectional information is obtained by concatenating −→G and ←−G . A coarse-grained so ftmax classifier is used to predict a (K+1)-class distribution y. Formally, y = so ftmax(Wc · [←−G ⊕−→G] + bc) (10) Where Wc is the transformation matrix and bc is the bias vector. Coarse-grained classifier makes use of representation with bidirectional information ignoring the direction of relations, which learns the inherent correlation between the same directed relations with opposite directions, such as Rx(e1, e2) and Rx(e2, e1). Two fine-grained so ftmax classifiers are applied to −→G and ←−G with linear transformation to give the (2K+1)-class distribution −→y and ←−y respectively. Formally, −→y = so ftmax(W f · −→G + bf ) (11) ←−y = so ftmax(W f · ←−G + bf ) (12) where Wf is the transformation matrix and bf is the bias vector. Classifying −→S and ←−S respecitvely at the same time can strengthen the model ability to judge the direction of relations. 2.5 Training Objective The (K + 1)-class softmax classifier is used to estimate probability that −→S and ←−S are of relation R . The two (2K + 1)-class softmax classifiers are used to estimate the probability that −→S and ←−S are of relation −→R and ←−R respectively. For a single data sample, the training objective is the penalized cross-entropy of three classifiers, given by J = 2K+1 X i=1 −→t i log −→y i + 2K+1 X i=1 ←−t i log ←−y i + K X i=1 ti log yi + λ · ||θ||2 (13) where t ∈RK+1, −→t and ←−t ∈R2K+1, indicating the one-hot represented ground truth. y, −→y and ←−y are the estimated probabilities for each class described in section 2.4. θ is the set of model parameters to be learned, and λ is a regularization coefficient. For decoding (predicting the relation of an unseen sample), the bidirectional model provides the (2K+1)-class distribution −→y and ←−y . The final (2K+1)-class distribution ytest becomes the combination of −→y and ←−y . 
Formally, ytest = α · −→y + (1 −α) · z(←−y ) (14) where α is the fraction of the composition of distributions, which is set to the value 0.65 according to the performance on validation dataset. During the implementation of BRCNN, elements in two class distributions at the same position are not corresponding, e.g. Cause-Effect(e1, e2) in −→y should correspond to Cause-Effect(e2, e1) in ←−y . We apply a function z to transform ←−y to a corresponding forward distribution like −→y . 3 Experiments 3.1 Dataset We evaluated our BRCNN model on the SemEval2010 Task 8 dataset, which is an established benchmark for relation classification (Hendrickx et al., 2010). The dataset contains 8000 sentences for training, and 2717 for testing. We split 800 samples out of the training set for validation. 760 Classifier Additional Information F1 SVM POS, WordNet, Prefixes and other morphological features, 82.2 (Rink and Harabagiu, 2010) dependency parse, Levin classed, PropBank, FanmeNet, NomLex-Plus, Google n-gram, paraphrases, TextRunner RNN Word embeddings 74.8 (Socher et al., 2011) + POS, NER, WordNet 77.6 MVRNN Word embeddings 79.1 (Socher et al., 2012) + POS, NER, WordNet 82.4 CNN Word embeddings 69.7 (Zeng et al., 2014) + word position embeddings, WordNet 82.7 FCM Word embeddings 80.6 (Yu et al., 2014) + dependency parsing, NER 83.0 CR-CNN Word embeddings 82.8 (dos Santos et al., 2015) + word position embeddings 84.1 SDP-LSTM Word embeddings 82.4 (Xu et al., 2015b) + POS + GR + WordNet embeddings 83.7 DepNN Word embeddings, WordNet 83.0 (Liu et al., 2015) Word embeddings, NER 83.6 depLCNN Word embeddings, WordNet, word around nominals 83.7 (Xu et al., 2015a) + negative sampling from NYT dataset 85.6 BRCNN Word embeddings 85.4 (Our Model) + POS, NER, WordNet embeddings 86.3 Table 1: Comparison of relation classification systems. The dataset has (K+1)=10 distinguished relations, as follows. • Cause-Effect • Component-Whole • Content-Container • Entity-Destination • Entity-Origin • Message-Topic • Member-Collection • Instrument-Agency • Product-Agency • Other The former K=9 relations are directed, whereas the Other class is undirected, we have (2K+1)=19 different classes for 10 relations. All baseline systems and our model use the official macroaveraged F1-score to evaluate model performance. This official measurement excludes the Other relation. 3.2 Hyperparameter Settings In our experiment, word embeddings were 200dimensional as used in (Yu et al., 2014), trained on Gigaword with word2vec (Mikolov et al., 2013). Embeddings of relation are 50-dimensional and initialized randomly. The hidden layers in each channel had the same number of units as their embeddings (200 or 50). The convolution layer was 200-dimensional. The above values were chosen according to the performance on the validation dataset. As we can see in Figure 1, dependency relation r “ prep −−−→” in −→S becomes r−1 “ prep ←−−−” in ←−S . Experiment results show that, the performance of BRCNN is improved if r and r−1 correspond to different relations embeddings rather than a same embedding. We notice that dependency relations contain much fewer symbols than the words contained in the vocabulary, and we initialize the embeddings of dependency relations randomly for they can be adequately tuned during supervised training. We add l2 penalty for weights with coefficient 10−5, and dropout of embeddings with rate 0.5. We applied AdaDelta for optimization (Zeiler, 2012), where gradients are computed with an adaptive learning rate. 
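Before turning to the results, the combination of the two fine-grained distributions at test time (Equation 14) can be made concrete with a short sketch. It is an illustrative reconstruction of the described decoding step rather than our actual implementation: the nine directed relation names are those listed in Section 3.1, the two input distributions are assumed to be NumPy arrays ordered consistently with LABELS, and the helper names are ours.

import numpy as np

RELATIONS = ["Cause-Effect", "Component-Whole", "Content-Container",
             "Entity-Destination", "Entity-Origin", "Message-Topic",
             "Member-Collection", "Instrument-Agency", "Product-Agency"]
# 19 classes: every directed relation in both directions, plus Other
LABELS = ([f"{r}(e1,e2)" for r in RELATIONS]
          + [f"{r}(e2,e1)" for r in RELATIONS] + ["Other"])

def z(y_backward):
    # map the backward distribution onto forward labels by flipping the
    # argument order of every directed relation; Other stays in place
    flip = {f"{r}(e1,e2)": f"{r}(e2,e1)" for r in RELATIONS}
    flip.update({v: k for k, v in flip.items()})
    flip["Other"] = "Other"
    out = np.empty_like(y_backward)
    for i, label in enumerate(LABELS):
        out[LABELS.index(flip[label])] = y_backward[i]
    return out

def decode(y_forward, y_backward, alpha=0.65):
    # Equation 14: y_test = alpha * y_forward + (1 - alpha) * z(y_backward)
    y_test = alpha * y_forward + (1 - alpha) * z(y_backward)
    return LABELS[int(np.argmax(y_test))]

Because z is a pure permutation of class indices, the interpolation never mixes probability mass between different relation types; it only reconciles the two opposite readings of each directed relation produced by the forward and backward classifiers.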
761 3.3 Results Table 1 compares our BRCNN model with other state-of-the-art methods. The first entry in the table presents the highest performance achieved by traditional feature-based methods. Rink and Harabagiu. (2010) fed a variety of handcrafted features to the SVM classifier and achieve an F1score of 82.2%. Recent performance improvements on this dataset are mostly achieved with the help of neural networks. Socher et al. (2012) built a recursive neural network on the constituency tree and achieved a comparable performance with Rink and Harabagiu. (2010). Further, they extended their recursive network with matrix-vector interaction and elevated the F1 to 82.4%. Xu et al. (2015b) first introduced a type of gated recurrent neural network (LSTM) into this task and raised the F1score to 83.7%. From the perspective of convolution, Zeng et al. (2014) constructed a CNN on the word sequence; they also integrated word position embeddings, which helped a lot on the CNN architecture. dos Santos et al. (2015) proposed a similar CNN model, named CR-CNN, by replacing the common so ftmax cost function with a ranking-based cost function. By diminishing the impact of the Other class, they have achieved an F1-score of 84.1%. Along the line of CNNs, Xu et al. (2015a) designed a simple negative sampling method, which introduced additional samples from other corpora like the NYT dataset. Doing so greatly improved the performance to a high F1-score of 85.6%. Liu et al. (2015) proposed a convolutional neural network with a recursive neural network designed to model the subtrees, and achieve an F1-score of 83.6%. Without the use of neural networks, Yu et al. (2014) proposed a Feature-based Compositional Embedding Model (FCM), which combined unlexicalized linguistic contexts and word embeddings. They achieved an F1-score of 83.0%. We make use of three types of information to improve the performance of BRCNN: POS tags, NER features and WordNet hypernyms. Our proposed BRCNN model yields an F1-score of 86.3%, outperforming existing competing approaches. Without using any human-designed features, our model still achieve an F1-score of 85.4%, while the best performance of state-of-theart methods is 84.1% (dos Santos et al., 2015). 3.4 Analysis Table 2 compares our RCNN model with CNNs and RNNs. Model F1 CNN 81.8 LSTM 76.6 Two-channel LSTM 81.5 RCNN 82.4 Table 2: Comparing RCNN with CNNs and RNNS. For a fair comparison, hyperparameters are set according to the performance on validation dataset as BRCNN . CNN with embeddings of words, positions and dependency relations as input achieves an F1-score of 81.8%. LSTM with word embeddings as input only achieves an F1-score of 76.6%, which proves that dependency relations in SDPs play an important role in relation classification. Two-channel LSTM concatenates the pooling layers of words and dependency relations along the shortest dependency path, achieves an F1-score of 81.5% which is still lower than CNN. RCNN captures features from dependency units by combining the advantages of CNN and RNN, and achieves an F1-score of 82.4%. Model Input F1 RCNN −→S of all relations 82.4 Bi-RCNN −→S and ←−S of all relations 81.2 Bi-RCNN −→S and ←−S of directed relations , 84.9 −→S of Other BRCNN −→S and ←−S of directed relations, 85.4 −→S of Other Table 3: Comparing different variants of our model. Bi-RCNN is a variant of BRCNN, which doesn’t have the coarse-grained classifier. −→S and ←−S are shortest dependency paths described in section 2.4. 
As shown in Table 3, if we inverted the SDP of all relations as input, we observe a performance degradation of 1.2% compared with RCNN. As mentioned in section 3.1, the SemEval-2010 task 8 dataset contains an undirected class Other in addition to 9 directed relations(18 classes). For bidirectional model, it is natural that the inversed Other relation is also in 762 the Other class itself. However, the class Other is used to indicate that relation between two nominals dose not belong to any of the 9 directed classes. Therefore, the class Other is very noisy since it groups many different types of relations with different directions. On the basis of the analysis above, we only inverse the SDP of directed relations. A significant improvement is observed and Bi-RCNN achieves an F1-score of 84.9%. This proves bidirectional representations provide more useful information to classify directed relations. We can see that our model still benefits from the coarse-grained classification, which can help our model learn inherent correlation between directed relations with opposite directions. Compared with Bi-RCNN classifying −→S and ←−S into 19 classes separately, BRCNN also conducts a 10 classes (9 directed relations and Other) classification and improves 0.5% in F1-score. Beyond the relation classification task, we believe that our bidirectional method is general technique, which is not restricted in a specific dataset and has the potential to benefit other NLP tasks. 4 Related Work Relation classification is an important topic in NLP. Traditional Methods for relation classification mainly fall into three classes: feature-based, kernel-based and neural network-based. In feature-based approaches, different types of features are extracted and fed into a classifier. Generally, three types of features are often used. Lexical features concentrate on the entities of interest, e.g., POS. Syntactic features include chunking, parse trees, etc. Semantic features are exemplified by the concept hierarchy, entity class. Kambhatla (2004) used a maximum entropy model for feature combination. Rink and Harabagiu (2010) collected various features, including lexical, syntactic as well as semantic features. In kernel based methods, similarity between two data samples is measured without explicit feature representation. Bunescu and Mooney (2005) designed a kernel along the shortest dependency path between two entities by observing that the relation strongly relies on SDPs. Wang (2008) provided a systematic analysis of several kernels and showed that relation extraction can benefit from combining convolution kernel and syntactic features. Plank and Moschitti (2013) combined structural information and semantic information in a tree kernel. One potential difficulty of kernel methods is that all data information is completely summarized by the kernel function, and thus designing an effective kernel becomes crucial. Recently, deep neural networks are playing an important role in this task. Socher et al. (2012) introduced a recursive neural network model that assigns a matrix-vector representation to every node in a parse tree, in order to learn compositional vector representations for sentences of arbitrary syntactic type and length. Convolutional neural works are widely used in relation classification. Zeng et al. (2014) proposed an approach for relation classification where sentence-level features are learned through a CNN, which has word embedding and position features as its input. 
In parallel, lexical features were extracted according to given nouns. dos Santos et al. (2015) tackled the relation classification task using a convolutional neural network and proposed a new pairwise ranking loss function, which achieved the state-of-the-art result in SemEval2010 Task 8. Yu et al. (2014) proposed a Factor-based Compositional Embedding Model (FCM) by deriving sentence-level and substructure embeddings from word embeddings, utilizing dependency trees and named entities. It achieved slightly higher accuracy on the same dataset than Zeng et al. (2014), but only when syntactic information is used. Nowadays, many works concentrate on extracting features from the SDP based on neural networks. Xu et al. (2015a) learned robust relation representations from SDP through a CNN, and proposed a straightforward negative sampling strategy to improve the assignment of subjects and objects. Liu et al. (2015) proposed a recursive neural network designed to model the subtrees, and CNN to capture the most important features on the shortest dependency path. Xu et al. (2015b) picked up heterogeneous information along the left and right sub-path of the SDP respectively, leveraging recurrent neural networks with long short term memory units. We propose BRCNN to model the SDP, which can pick up bidirectional information with a combination of LSTM and CNN. 763 5 Conclusion In this paper, we proposed a novel bidirectional neural network BRCNN, to improve the performance of relation classification. The BRCNN model, consisting of two RCNNs, learns features along SDP and inversely at the same time. Information of words and dependency relations are used utilizing a two-channel recurrent neural network with LSTM units. The features of dependency units in SDP are extracted by a convolution layer. We demonstrate the effectiveness of our model by evaluating the model on SemEval-2010 relation classification task. RCNN achieves a better performance at learning features along the shortest dependency path, compared with some common neural networks. A significant improvement is observed when BRCNN is used, outperforming state-of-the-art methods. 6 Acknowledgements Our work is supported by National Natural Science Foundation of China (No.61370117 & No. 61433015) and Major National Social Science Fund of China (No. 12&ZD227). References Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the conference on Human Language, pages 724–731. Cıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference, pages 626–634. Kazuma Hashimoto, Makoto Miwa, Yoshimasa Tsuruoka, and Takashi Chikayama. 2013. Simple customization of recursive neural networks for semantic relation classification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1372–1376. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions. Association for Computational Linguistics, pages pages 94–99. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. 
Long short-term memory. In Neural computation, pages 1735–1780. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 22. Association for Computational Linguistics. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Joint Conference on Natural Language Processing and the 7th International Joint Conference on Natural Language Processing, pages 285–290. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and JeffDean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Barbara Plank and Alessandro Moschitti. 2013. Embeddings semantic similarity in tree kernels for domain adaption of relation extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1498–1507. Bryan Rink and Sanda Harabagiu. 2010. Utd: Classifying semantic relations by combining lexical and semantic resources. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 256–259. Association for Computational Linguistics. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 151–161. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Mengqiu Wang. 2008. A re-examination of dependency path kernels for relation extraction. In Proceedings of the Third International Joint Conference on Natural Language Processing, pages 841–846. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classification via convolutional neural networks with simple negative sampling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 536–540. 764 Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of Conference on Empirical Methods in Natural Language Processing,, pages 1785–1794. Mo Yu, Matthew Gormley, and Mark Dredze. 2014. Factor-based compositional embedding models. In NIPS Workshop on Learning Semantics, pages 95– 101. Wojciech Zaremba and Ilya Sutskever. 2014. Learning to execute. arXiv preprint arXiv:1410.4615. Mathew D. Zeiler. 2012. An adaptive learning rate method. In arXiv preprint at arXiv:1212.5701. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. The Journal of Machine Learning Research, 3:1083–1106. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335–2344. 765
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 766–777, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Sentence Rewriting for Semantic Parsing Bo Chen Le Sun Xianpei Han Bo An State Key Laboratory of Computer Sciences Institute of Software, Chinese Academy of Sciences, China. {chenbo, sunle, xianpei, anbo}@nfs.iscas.ac.cn Abstract A major challenge of semantic parsing is the vocabulary mismatch problem between natural language and target ontology. In this paper, we propose a sentence rewriting based semantic parsing method, which can effectively resolve the mismatch problem by rewriting a sentence into a new form which has the same structure with its target logical form. Specifically, we propose two sentence-rewriting methods for two common types of mismatch: a dictionary-based method for 1N mismatch and a template-based method for N-1 mismatch. We evaluate our sentence rewriting based semantic parser on the benchmark semantic parsing dataset – WEBQUESTIONS. Experimental results show that our system outperforms the base system with a 3.4% gain in F1, and generates logical forms more accurately and parses sentences more robustly. 1 Introduction Semantic parsing is the task of mapping natural language sentences into logical forms which can be executed on a knowledge base (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Kate and Mooney, 2006; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowksi et al., 2010). Figure 1 shows an example of semantic parsing. Semantic parsing is a fundamental technique of natural language understanding, and has been used in many applications, such as question answering (Liang et al., 2011; He et al., 2014; Zhang et al., 2016) and information extraction (Krishnamurthy and Mitchell, 2012; Choi et al., 2015; Parikh et al., 2015). Semantic parsing, however, is a challenging Sentence: What is the capital of Germany? Logical form: λx.capital(Germany,x) Result: {Berlin} (Semantic parsing) KB (Execution) Figure 1: An example of semantic parsing. task. Due to the variety of natural language expressions, the same meaning can be expressed using different sentences. Furthermore, because logical forms depend on the vocabulary of targetontology, a sentence will be parsed into different logical forms when using different ontologies. For example, in below the two sentences s1 and s2 express the same meaning, and they both can be parsed into the two different logical forms lf1 and lf2 using different ontologies. s1 What is the population of Berlin? s2 How many people live in Berlin? lf1 λx.population(Berlin,x) lf2 count(λx.person(x)∧live(x,Berlin)) Based on the above observations, one major challenge of semantic parsing is the structural mismatch between a natural language sentence and its target logical form, which are mainly raised by the vocabulary mismatch between natural language and ontologies. Intuitively, if a sentence has the same structure with its target logical form, it is easy to get the correct parse, e.g., a semantic parser can easily parse s1 into lf1 and s2 into lf2. On the contrary, it is difficult to parse a sentence into its logic form when they have different structures, e.g., s1 →lf2 or s2 →lf1. To resolve the vocabulary mismatch problem, 766 (a) An example using traditional method s0 : What is the name of Sonia Gandhis daughter? 
l0 : λx.child(S.G.,x) r0 : {Rahul Gandhi (Wrong answer), Priyanka Vadra} (b) An example using our method s0 : What is the name of Sonia Gandhis daughter? s1 : What is the name of Sonia Gandhis female child? l1 : λx.child(S.G.,x)∧gender(x,female) r1 : {Priyanka Vadra} Table 1: Examples of (a) sentences s0, possible logical form l0 from traditional semantic parser, result r0 for the logical form l0; (b) possible sentence s1 from rewriting for the original sentence s0, possible logical form l1 for sentence s1, result r1 for l1. Rahul Gandhi is a wrong answer, as he is the son of Sonia Gandhi. this paper proposes a sentence rewriting approach for semantic parsing, which can rewrite a sentence into a form which will have the same structure with its target logical form. Table 1 gives an example of our rewriting-based semantic parsing method. In this example, instead of parsing the sentence “What is the name of Sonia Gandhis daughter?” into its structurally different logical form childOf.S.G.∧gender.female directly, our method will first rewrite the sentence into the form “What is the name of Sonia Gandhis female child?”, which has the same structure with its logical form, then our method will get the logical form by parsing this new form. In this way, the semantic parser can get the correct parse more easily. For example, the parse obtained through traditional method will result in the wrong answer “Rahul Gandhi”, because it cannot identify the vocabulary mismatch between “daughter” and child∧female1. By contrast, by rewriting “daughter” into “female child”, our method can resolve this vocabulary mismatch. Specifically, we identify two common types of vocabulary mismatch in semantic parsing: 1. 1-N mismatch: a simple word may correspond to a compound formula. For example, the word “daughter” may correspond to the compound formula child∧female. 2. N-1 mismatch: a logical constant may correspond to a complicated natural language expression, e.g., the formula population can be expressed using many phrases such as “how many people” and “live in”. 1In this paper, we may simplify logical forms for readability, e.g., female for gender.female. To resolve the above two vocabulary mismatch problems, this paper proposes two sentence rewriting algorithms: One is a dictionary-based sentence rewriting algorithm, which can resolve the 1-N mismatch problem by rewriting a word using its explanation in a dictionary. The other is a template-based sentence rewriting algorithm, which can resolve the N-1 mismatch problem by rewriting complicated expressions using paraphrase template pairs. Given the generated rewritings of a sentence, we propose a ranking function to jointly choose the optimal rewriting and the correct logical form, by taking both the rewriting features and the semantic parsing features into consideration. We conduct experiments on the benchmark WEBQUESTIONS dataset (Berant et al., 2013). Experimental results show that our method can effectively resolve the vocabulary mismatch problem and achieve accurate and robust performance. The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 describes our sentence rewriting method for semantic parsing. Section 4 presents the scoring function which can jointly ranks rewritings and logical forms. Section 5 discusses experimental results. Section 6 concludes this paper. 2 Related Work Semantic parsing has attracted considerable research attention in recent years. 
Generally, semantic parsing methods can be categorized into synchronous context free grammars (SCFG) based methods (Wong and Mooney, 2007; Arthur et al., 2015; Li et al., 2015), syntactic structure based methods (Ge and Mooney, 2009; Reddy et al., 2014; Reddy et al., 2016), combinatory categorical grammars (CCG) based methods (Zettlemoyer and Collins, 2007; Kwiatkowksi et al., 2010; Kwiatkowski et al., 2011; Krishnamurthy and Mitchell, 2014; Wang et al., 2014; Artzi et al., 2015), and dependency-based compositional semantics (DCS) based methods (Liang et al., 2011; Berant et al., 2013; Berant and Liang, 2014; Berant and Liang, 2015; Pasupat and Liang, 2015; Wang et al., 2015). One major challenge of semantic parsing is how to scale to open-domain situation like Freebase and Web. A possible solution is to learn lexicons from large amount of web text and a knowledge base using a distant supervised method (Krishna767 murthy and Mitchell, 2012; Cai and Yates, 2013a; Berant et al., 2013). Another challenge is how to alleviate the burden of annotation. A possible solution is to employ distant-supervised techniques (Clarke et al., 2010; Liang et al., 2011; Cai and Yates, 2013b; Artzi and Zettlemoyer, 2013), or unsupervised techniques (Poon and Domingos, 2009; Goldwasser et al., 2011; Poon, 2013). There were also several approaches focused on the mismatch problem. Kwiatkowski et al. (2013) addressed the ontology mismatch problem (i.e., two ontologies using different vocabularies) by first parsing a sentence into a domainindependent underspecified logical form, and then using an ontology matching model to transform this underspecified logical form to the target ontology. However, their method is still hard to deal with the 1-N and the N-1 mismatch problems between natural language and target ontologies. Berant and Liang (2014) addressed the structure mismatch problem between natural language and ontology by generating a set of canonical utterances for each candidate logical form, and then using a paraphrasing model to rerank the candidate logical forms. Their method addresses mismatch problem in the reranking stage, cannot resolve the mismatch problem when constructing candidate logical forms. Compared with these two methods, we approach the mismatch problem in the parsing stage, which can greatly reduce the difficulty of constructing the correct logical form, through rewriting sentences into the forms which will be structurally consistent with their target logic forms. Sentence rewriting (or paraphrase generation) is the task of generating new sentences that have the same meaning as the original one. Sentence rewriting has been used in many different tasks, e.g., used in statistical machine translation to resolve the word order mismatch problem (Collins et al., 2005; He et al., 2015). To our best knowledge, this paper is the first work to apply sentence rewriting for vocabulary mismatch problem in semantic parsing. 3 Sentence Rewriting for Semantic Parsing As discussed before, the vocabulary mismatch between natural language and target ontology is a big challenge in semantic parsing. In this section, we describe our sentence rewriting algorithm for Word Logical Form Wiktionary Explanation son child∧male male child actress actor∧female female actor father parent∧male male parent grandaprent parent∧parent parent of one’s parent brother sibling∧male male sibling Table 2: Several examples of words, their logical forms and their explanations in Wiktionary. solving the mismatch problem. 
Specifically, we solve the 1-N mismatch problem by dictionarybased rewriting and solve the N-1 mismatch problem by template-based rewriting. The details are as follows. 3.1 Dictionary-based Rewriting In the 1-N mismatch case, a word will correspond to a compound formula, e.g., the target logical form of the word “daughter” is child∧female (Table 2 has more examples). To resolve the 1-N mismatch problem, we rewrite the original word (“daughter”) into an expression (“female child”) which will have the same structure with its target logical form (child∧female). In this paper, we rewrite words using their explanations in a dictionary. This is because each word in a dictionary will be defined by a detailed explanation using simple words, which often will have the same structure with its target formula. Table 2 shows how the vocabulary mismatch between a word and its logical form can be resolved using its dictionary explanation. For instance, the word “daughter” is explained as “female child” in Wiktionary, which has the same structure as child∧female. In most cases, only common nouns will result in the 1-N mismatch problem. Therefore, in order to control the size of rewritings, this paper only rewrite the common nouns in a sentence by replacing them with their dictionary explanations. Because a sentence usually will not contain too many common nouns, the size of candidate rewritings is thus controllable. Given the generated rewritings of a sentence, we propose a sentence selection model to choose the best rewriting using multiple features (See details in Section 4). Table 3 shows an example of the dictionarybased rewriting. In Table 3, the example sentence s contains two common nouns (“name” and “daughter”), therefore we will generate three rewritings r1, r2 and r3. Among these rewritings, 768 s : What is the name of Sonia Gandhis daughter? r1: What is the reputation of Sonia Gandhis daughter? r2: What is the name of Sonia Gandhis female child? r3: What is the reputation of Sonia Gandhis female child? Table 3: An example of the dictionary-based sentence rewriting. the candidate rewriting r2 is what we expected, as it has the same structure with the target logical form and doesn’t bring extra noise (i.e., replacing “name” with its explanation “reputation”). For the dictionary used in rewriting, this paper uses Wiktionary. Specifically, given a word, we use its “Translations” part in the Wiktionary as its explanation. Because most of the 1-N mismatch are caused by common nouns, we only collect the explanations of common nouns. Furthermore, for polysomic words which have several explanations, we only use their most common explanations. Besides, we ignore explanations whose length are longer than 5. 3.2 Template-based Rewriting In the N-1 mismatch case, a complicated natural language expression will be mapped to a single logical constant. For example, considering the following mapping from the natural language sentence s to its logical form lf based on Freebase ontology: s: How many people live in Berlin? lf: λx.population(Berlin,x) where the three words: “how many” (count), “people” (people) and “live in” (live) will map to the predicate population together. Table 4 shows more N-1 examples. Expression Logical constant how many, people, live in population how many, people, visit, annually annual-visit what money, use currency what school, go to education what language, speak, officially officiallanguage Table 4: Several N-1 mismatch examples. 
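Before turning to the template-based method, the dictionary-based rewriting of Section 3.1 can be sketched as follows. This is a minimal illustration under simplifying assumptions: the explanation dictionary is a tiny hand-picked stand-in for the Wiktionary "Translations" entries, and common-noun detection is reduced to a dictionary lookup rather than POS tagging.

```python
from itertools import combinations

# Toy stand-in for the Wiktionary-derived explanations of common nouns
# (the full system keeps only explanations of at most 5 words).
EXPLANATIONS = {
    "daughter": "female child",
    "son": "male child",
    "father": "male parent",
    "name": "reputation",   # a noisy explanation, kept to show why ranking is needed
}
MAX_EXPLANATION_LEN = 5

def dictionary_rewritings(sentence):
    """Generate candidate rewritings by replacing any non-empty subset of the
    common nouns in the sentence with their dictionary explanations."""
    tokens = sentence.rstrip("?").split()
    noun_positions = [
        i for i, tok in enumerate(tokens)
        if tok.lower() in EXPLANATIONS
        and len(EXPLANATIONS[tok.lower()].split()) <= MAX_EXPLANATION_LEN
    ]
    candidates = []
    for r in range(1, len(noun_positions) + 1):
        for subset in combinations(noun_positions, r):
            new_tokens = list(tokens)
            for i in subset:
                new_tokens[i] = EXPLANATIONS[tokens[i].lower()]
            candidates.append(" ".join(new_tokens) + "?")
    return candidates

print(dictionary_rewritings("What is the name of Sonia Gandhis daughter?"))
# -> the three rewritings r1-r3 of Table 3, e.g.
#    "What is the name of Sonia Gandhis female child?"
```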
To resolve the N-1 mismatch problem, we propose a template rewriting algorithm, which can rewrite a complicated expression into its simpler form. Specifically, we rewrite sentences based on a set of paraphrase template pairs P = {(ti1, ti2)|i = 1, 2, ..., n}, where each template t Template 1 Template 2 How many people live in $y What is the population of $y What money in $y is used What is the currency of $y What school did $y go to What is the education of $y What language does $y speak officially What is the official language of $y Table 5: Several examples of paraphrase template pairs. is a sentence with an argument slot $y, and ti1 and ti2 are paraphrases. In this paper, we only consider single-slot templates. Table 5 shows several paraphrase template pairs. Given the template pair database and a sentence, our template-based rewriting algorithm works as follows: 1. Firstly, we generate a set of candidate templates ST = {st1, st2, ..., stn} of the sentence by replacing each named entity within it by “$y”. For example, we will generate template “How many people live in $y” from the sentence “How many people live in Berlin”. 2. Secondly, using the paraphrase template pair database, we retrieve all possible rewriting template pairs (t1, t2) with t1 ∈ST, e.g., we can retrieve template pair (“How many people live there in $y”, “What is the population of $y” for t2) using the above ST. 3. Finally, we get the rewritings by replacing the argument slot “$y” in template t2 with the corresponding named entity. For example, we get a new candidate sentence “What is the population of Berlin” by replacing “$y” in t2 with Berlin. In this way we can get the rewriting we expected, since this rewriting will match its target logical form population(Berlin). To control the size and measure the quality of rewritings using a specific template pair, we also define several features and the similarity between template pairs (See Section 4 for details). To build the paraphrase template pair database, we employ the method described in Fader et al. (2014) to automatically collect paraphrase template pairs. Specifically, we use the WikiAnswers paraphrase corpus (Fader et al., 2013), which contains 23 million question-clusters, and all ques769 How many people live in chembakolli? How many people is in chembakolli? How many people live in chembakolli india? How many people live there chembakolli? How many people live there in chembakolli? What is the population of Chembakolli india? What currency is used on St Lucia? What is st lucia money? What is the money used in st lucia? What kind of money did st lucia have? What money do st Lucia use? Which money is used in St Lucia? Table 6: Two paraphrase clusters from the WikiAnswers corpus. tions in the same cluster express the same meaning. Table 6 shows two paraphrase clusters from the WikiAnswers corpus. To build paraphrase template pairs, we first replace the shared noun words in each cluster with the placeholder “$y”, then each two templates in a cluster will form a paraphrase template pair. To filter out noisy template pairs, we only retain salient paraphrase template pairs whose co-occurrence count is larger than 3. 4 Sentence Rewriting based Semantic Parsing In this section we describe our semantic rewriting based semantic parsing system. Figure 2 presents the framework of our system. 
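The rewriting stage of this framework follows the three-step template procedure of Section 3.2; a minimal sketch is given below. It is illustrative only: the template-pair list is a small sample in the spirit of Table 5 (the full system mines such pairs from WikiAnswers and filters them by co-occurrence count), entity mentions are assumed to be given, and retrieval is simplified to exact template matching rather than similarity-scored lookup.

```python
# A few paraphrase template pairs in the spirit of Table 5 (illustrative only).
TEMPLATE_PAIRS = [
    ("how many people live in $y", "what is the population of $y"),
    ("what money in $y is used", "what is the currency of $y"),
    ("what language does $y people speak", "what language is spoken in $y"),
]

def template_rewritings(sentence, entities):
    """The three-step template-based rewriting of Section 3.2.
    Entity mentions are assumed to be provided by an upstream detector."""
    sentence = sentence.rstrip("?").lower()
    rewritings = []
    for entity in entities:
        entity_lc = entity.lower()
        if entity_lc not in sentence:
            continue
        # Step 1: build a candidate template by replacing the entity with "$y".
        source_template = sentence.replace(entity_lc, "$y")
        # Step 2: retrieve paraphrase template pairs whose first template matches.
        for t1, t2 in TEMPLATE_PAIRS:
            if t1 == source_template:
                # Step 3: fill the slot of the paraphrase template with the entity.
                rewritings.append(t2.replace("$y", entity) + "?")
    return rewritings

print(template_rewritings("How many people live in Berlin?", ["Berlin"]))
# -> ["what is the population of Berlin?"]
```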
Given a sentence, we first rewrite it into a set of new sentences, then we generate candidate logical forms for each new sentence using a base semantic parser, finally we score all logical forms using a scoring function and output the best logical form as the final result. In following, we first introduce the used base semantic parser, then we describe the proposed scoring function. 4.1 Base Semantic Parser In this paper, we produce logical forms for each sentence rewritings using an agenda-based semantic parser (Berant and Liang, 2015), which is based on the lambda-DCS proposed by Liang (2013). For parsing, we use the lexicons and the grammars released by Berant et al. (2013), where lexicons are used to trigger unary and binary predicates, and grammars are used to conduct logical forms. The only difference is that we also use the composition rule to make the parser can handle complicated questions involving two binary predicates, e.g., child.obama∧gender.female. Original sentence New sentences Logical forms Results (Sentence rewriting) (Semantic parsing) (Executing) Figure 2: The framework of our sentence rewriting based semantic parsing. For model learning and sentence parsing, the base semantic parser learned a scoring function by modeling the policy as a log-linear distribution over (partial) agenda derivations Q: pθ(a|s) = exp{φ(a)T θ)} P a′∈A exp{φ(a′)T θ)} (1) The policy parameters are updated as follows: θ ←θ + ηR(htarget) XT t=1 δ(htarget) (2) δt(h) = ∇θ log pθ(at|st) = φ(at) −Epθ(a′ t|st)[φ(a′ t)] (3) The reward function R(h) measures the compatibility of the resulting derivation, and η is the learning rate which is set using the AdaGrad algorithm (Duchi et al., 2011). The target history htarget is generated from the root derivation d∗with highest reward out of the K (beam size) root derivations, using local reweighting and history compression. 4.2 Scoring Function To select the best semantic parse, we propose a scoring function which can take both sentence rewriting features and semantic parsing features into consideration. Given a sentence x, a generated rewriting x′ and the derivation d of x′, we score them using follow function: score(x, x′, d) = θ · φ(x, x′, d) = θ1 · φ(x, x′) + θ2 · φ(x′, d) This scoring function is decomposed into two parts: one for sentence rewriting – θ1 · φ(x, x′) and the other for semantic parsing – θ2 · φ(x′, d). Following Berant and Liang (2015), we update the parameters θ2 of semantic parsing features as the 770 Input: Q/A pairs {(xi, yi) : i = 1...n}; Knowledge base K; Number of sentences N; Number of iterations T. Definitions: The function REWRITING(xi) returns a set of candidate sentences by applying sentence rewriting on sentence x; PARSE(pθ, x) parses the sentence x based on current parameters θ, using agendabased parsing; CHOOSEORACLE(h0) chooses the derivation with highest reward from the root of h0; CHOOSEORACLE(Htarget) chooses the derivation with highest reward from a set of derivations. CHOOSEORACLE(h∗ target) chooses the new sentence that results in derivation with highest reward. Algorithm: θ1 ←0, θ2 ←0 for t = 1...T, i = 1...N: X = REWRITING(xi) for each x′ i ∈X : h0 ←PARSE(pθ, x′ i) d∗←CHOOSEORACLE(h0) htarget ←PARSE(p+cw θ , x′ i) h∗ target ←CHOOSEORACLE(Htarget) x′∗ i ←CHOOSEORACLE(h∗ target) θ2 ←θ2 + ηR(h∗ target) PT t=1 δ(h∗ target) θ1 ←θ1 + ηR(h∗ target)δ(xi, x′∗ i ) Output: Estimated parameters θ1 and θ2. Table 7: Our learning algorithm for parameter estimation from question-answer pairs. same as (2). 
Similarly, the parameters θ1 of sentence rewriting features are updated as follows: θ1 ←θ1 + ηR(h∗ target)δ(x, x′∗) δ(x, x′∗) = ∇log pθ1(x′∗|x) = φ(x, x′∗) −Epθ1(x′|x)[φ(x, x′)] where the learning rate η is set using the same algorithm in Formula (2). 4.3 Parameter Learning Algorithm To estimate the parameters θ1 and θ2, our learning algorithm uses a set of question-answer pairs (xi, yi). Following Berant and Liang (2015), our updates for θ1 and θ2 do not maximize reward nor the log-likelihood. However, the reward provides a way to modulate the magnitude of the updates. Specifically, after each update, our model results in making the derivation, which has the highest reward, to get a bigger score. Table 7 presents our learning algorithm. 4.4 Features As described in Section 4.3, our model uses two kinds of features. One for the semantic parsing module which are simply the same features described in Berant and Liang (2015). One for the sentence rewriting module these features are defined over the original sentence, the generated sentence rewritings and the final derivations: Features for dictionary-based rewriting. Given a sentence s0, when the new sentence s1 is generated by replacing a word to its explanation w → ex, we will generate four features: The first feature indicates the word replaced. The second feature indicates the replacement w →ex we used. The final two features are the POS tags of the left word and the right word of w in s0. Features for template-based rewriting. Given a sentence s0, when the new sentence s1 is generated through a template based rewriting t1 →t2, we generate four features: The first feature indicates the template pair (t1, t2) we used. The second feature is the similarity between the sentence s0 and the template t1, which is calculated using the word overlap between s0 and t1. The third feature is the compatibility of the template pair, which is the pointwise mutual information (PMI) between t1 and t2 in the WikiAnswers corpus. The final feature is triggered when the target logical form only contains an atomic formula (or predicate), and this feature indicates the mapping from template t2 to the predicate p. 5 Experiments In this section, we assess our method and compare it with other methods. 5.1 Experimental Settings Dataset: We evaluate all systems on the benchmark WEBQUESTIONS dataset (Berant et al., 2013), which contains 5,810 question-answer pairs. All questions are collected by crawling the Google Suggest API, and their answers are obtained using Amazon Mechanical Turk. This dataset covers several popular topics and its questions are commonly asked on the web. According to Yao (2015), 85% of questions can be answered by predicting a single binary relation. In our experiments, we use the standard train-test split (Berant et al., 2013), i.e., 3,778 questions (65%) for training and 2,032 questions (35%) for testing, and divide the training set into 3 random 80%-20% splits for development. Furthermore, to verify the effectiveness of our method on solving the vocabulary mismatch problem, we manually select 50 mismatch test examples from the WEBQUESTIONS dataset, where 771 all sentences have different structure with their target logical forms, e.g., “Who is keyshia cole dad?” and “What countries have german as the official language?”. System Settings: In our experiments, we use the Freebase Search API for entity lookup. We load Freebase using Virtuoso, and execute logical forms by converting them to SPARQL and querying using Virtuoso. 
We learn the parameters of our system by making three passes over the training dataset, with the beam size K = 200, the dictionary rewriting size KD = 100, and the template rewriting size KT = 100. Baselines: We compare our method with several traditional systems, including semantic parsing based systems (Berant et al., 2013; Berant and Liang, 2014; Berant and Liang, 2015; Yih et al., 2015), information extraction based systems (Yao and Van Durme, 2014; Yao, 2015), machine translation based systems (Bao et al., 2014), embedding based systems (Bordes et al., 2014; Yang et al., 2014), and QA based system (Bast and Haussmann, 2015). Evaluation: Following previous work (Berant et al., 2013), we evaluate different systems using the fraction of correctly answered questions. Because golden answers may have multiple values, we use the average F1 score as the main evaluation metric. 5.2 Experimental Results Table 8 provides the performance of all base-lines and our method. We can see that: 1. Our method achieved competitive performance: Our system outperforms all baselines and get the best F1-measure of 53.1 on WEBQUESTIONS dataset. 2. Sentence rewriting is a promising technique for semantic parsing: By employing sentence rewriting, our system gains a 3.4% F1 improvement over the base system we used (Berant and Liang, 2015). 3. Compared to all baselines, our system gets the highest precision. This result indicates that our parser can generate more-accurate logical forms by sentence rewriting. Our system also achieves the second highest recall, which is a competitive performance. Interestingly, both the two systems with the highest recall (Bast and Haussmann, 2015; Yih et al., System Prec. Rec. F1 (avg) Berant et al., 2013 48.0 41.3 35.7 Yao and Van-Durme, 2014 51.7 45.8 33.0 Berant and Liang, 2014 40.5 46.6 39.9 Bao et al., 2014 – – 37.5 Bordes et al., 2014a – – 39.2 Yang et al., 2014 – – 41.3 Bast and Haussmann, 2015 49.8 60.4 49.4 Yao, 2015 52.6 54.5 44.3 Berant and Liang, 2015 50.5 55.7 49.7 Yih et al., 2015 52.8 60.7 52.5 Our approach 53.7 60.0 53.1 Table 8: The results of our system and recently published systems. The results of other systems are from either original papers or the standard evaluation web. 2015) rely on extra-techniques such as entity linking and relation matching. The effectiveness on mismatch problem. To analyze the commonness of mismatch problem in semantic parsing, we randomly sample 500 questions from the training data and do manually analysis, we found that 12.2% out of the sampled questions have mismatch problems: 3.8% out of them have 1-N mismatch problem and 8.4% out of them have N-1 mismatch problem. To verify the effectiveness of our method on solving the mismatch problem, we conduct experiments on the 50 mismatch test examples and Table 9 shows the performance. We can see that our system can effectively resolve the mismatch between natural language and target ontology: compared to the base system, our system achieves a significant 54.5% F1 im-provement. System Prec. Rec. F1 (avg) Base system 31.4 43.9 29.4 Our system 83.3 92.3 83.9 Table 9: The results on the 50 mismatch test dataset. When scaling a semantic parser to open-domain situation or web situation, the mismatch problem will be more common as the ontology and language complexity increases (Kwiatkowski et al., 2013). Therefore we believe the sentence rewriting method proposed in this paper is an important technique for the scalability of semantic parser. The effect of different rewriting algorithms. 
To analyze the contribution of different rewriting methods, we perform experiments using different sentence rewriting methods and the results are presented in Table 10. We can see that: 772 Method Prec. Rec. F1 (avg) base 49.8 55.3 49.1 + dictionary SR (only) 51.6 57.5 50.9 + template SR (only) 52.9 59.0 52.3 + both 53.7 60.0 53.1 Table 10: The results of the base system and our systems on the 2032 test questions. 1. Both sentence rewriting methods improved the parsing performance, they resulted in 1.8% and 3.2% F1 improvements respectively2. 2. Compared with the dictionary-based rewriting method, the template-based rewriting method can achieve higher performance improvement. We believe this is because N-1 mismatch problem is more common in the WEBQUESTIONS dataset. 3. The two rewriting methods are good complementary of each other. The semantic parser can achieve a higher performance improvement when using these two rewriting methods together. The effect on improving robustness. We found that the template-based rewriting method can greatly improve the robustness of the base semantic parser. Specially, the template-based method can rewrite similar sentences into a uniform template, and the (template, predicate) feature can provide additional information to reduce the uncertainty during parsing. For example, using only the uncertain alignments from the words “people” and “speak” to the two predicates official language and language spoken, the base parser will parse the sentence “What does jamaican people speak?” into the incorrect logical form official language.jamaican in our experiments, rather than into the correct form language spoken.jamaican (See the final example in Table 11). By exploiting the alignment from the template “what language does $y people speak” to the predicate , our system can parse the above sentence correctly. The effect on OOV problem. We found that the sentence rewriting method can also provide extra 2Our base system yields a slight drop in accuracy compared to the original system (Berant and Liang, 2015), as we parallelize the learning algorithm, and the order of the data for updating the parameter is different to theirs. O Who is willow smith mom name? R Who is willow smith female parent name? LF parentOf.willow smith∧gender.female O Who was king henry viii son? R Who was king henry viii male child? LF childOf.king henry∧gender.male O What are some of the traditions of islam? R What is of the religion of islam? LF religionOf.islam O What does jamaican people speak? R What language does jamaican people speak? LF language spoken.jamaica Table 11: Examples which our system generates more accurate logical form than the base semantic parser. O is the original sentence; R is the generated sentence from sentence rewriting (with the highest score for the model, including rewriting part and parsing part); LF is the target logical form. profit for solving the OOV problem. Traditionally, if a sentence contains a word which is not covered by the lexicon, it will cannot be correctly parsed. However, with the help of sentence rewriting, we may rewrite the OOV words into the words which are covered by our lexicons. For example, in Table 11 the 3rd question “What are some of the traditions of islam?” cannot be correctly parsed as the lexicons dont cover the word “tradition”. Through sentence rewriting, we can generate a new sentence “What is of the religion of islam?”, where all words are covered by the lexicons, in this way the sentence can be correctly parsed. 
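For reference, the average F1 reported in Tables 8–10 above can be computed as sketched below: a per-question F1 between the predicted answer set and the gold answer set, averaged over all questions. This is a minimal sketch that assumes answers are compared as exact string sets; the official WEBQUESTIONS scorer may normalize answers differently.

```python
def f1(predicted, gold):
    """Per-question F1 between a predicted answer set and a gold answer set."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0
    if not predicted or not gold:
        return 0.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def average_f1(predictions, golds):
    """Average F1 over all questions (the main evaluation metric)."""
    return sum(f1(p, g) for p, g in zip(predictions, golds)) / len(golds)

# Example: one fully correct answer set and one that includes a wrong answer.
preds = [["Priyanka Vadra"], ["Rahul Gandhi", "Priyanka Vadra"]]
golds = [["Priyanka Vadra"], ["Priyanka Vadra"]]
print(average_f1(preds, golds))  # (1.0 + 0.667) / 2 = 0.83
```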
5.3 Error Analysis To better understand our system, we conduct error analysis on the parse results. Specifically, we randomly choose 100 questions which are not correctly answered by our system. We found that the errors are mainly raised by following four reasons (See Table 12 for detail): Reason #(Ratio) Sample Example Label issue 38 What band was george clinton in? N-ary predicate(n > 2) 31 What year did the seahawks win the superbowl? Temporal clause 15 Who was the leader of the us during wwii? Superlative 8 Who was the first governor of colonial south carolina? Others 8 What is arkansas state capitol? Table 12: The main reasons of parsing errors, the ratio and an example for each reason are also provided. 773 The first reason is the label issue. The main label issue is incompleteness, i.e., the answers of a question may not be labeled completely. For example, for the question “Who does nolan ryan play for?”, our system returns 4 correct teams but the golden answer only contain 2 teams. One another label issue is the error labels. For example, the gold answer of the question “What state is barack obama from?” is labeled as “Illinois”, however, the correct answer is “Hawaii”. The second reason is the n-ary predicate problem (n > 2). Currently, it is hard for a parser to conduct the correct logical form of n-ary predicates. For example, the question “What year did the seahawks win the superbowl?” describes an nary championship event, which gives the championship and the champion of the event, and expects the season. We believe that more research attentions should be given on complicated cases, such as the n-ary predicates parsing. The third reason is temporal clause. For example, the question “Who did nasri play for before arsenal?” contains a temporal clause “before”. We found temporal clause is complicated and makes it strenuous for the parser to understand the sentence. The fourth reason is superlative case, which is a hard problem in semantic parsing. For example, to answer “What was the name of henry viii first wife?”, we should choose the first one from a list ordering by time. Unfortunately, it is difficult for the current parser to decide what to be ordered and how to order. There are also many other miscellaneous error cases, such as spelling error in the question, e.g., “capitol” for “capital”, “mary” for “marry”. 6 Conclusions In this paper, we present a novel semantic parsing method, which can effectively deal with the mismatch between natural language and target ontology using sentence rewriting. We resolve two common types of mismatch (i) one word in natural language sentence vs one compound formula in target ontology (1-N), (ii) one complicated expression in natural language sentence vs one formula in target ontology (N-1). Then we present two sentence rewriting methods, dictionary-based method for 1-N mismatch and template-based method for N-1 mismatch. The resulting system significantly outperforms the base system on the WEBQUESTIONS dataset. Currently, our approach only leverages simple sentence rewriting methods. In future work, we will explore more advanced sentence rewriting methods. Furthermore, we also want to employ sentence rewriting techniques for other challenges in semantic parsing, such as the spontaneous, unedited natural language input, etc. Acknowledgments We sincerely thank the reviewers for their valuable comments and suggestions. This work is supported by the National High Technology Development 863 Program of China under Grants no. 
2015AA015405, and the National Natural Science Foundation of China under Grants no. 61433015, 612722324 and 61572477. References Philip Arthur, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Semantic parsing of ambiguous input through paraphrasing and verification. Transactions of the Association for Computational Linguistics, 3:571–584. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1(1):49–62. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage ccg semantic parsing with amr. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710, Lisbon, Portugal, September. Association for Computational Linguistics. Junwei Bao, Nan Duan, Ming Zhou, and Tiejun Zhao. 2014. Knowledge-based question answering as machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 967– 976, Baltimore, Maryland, June. Association for Computational Linguistics. Hannah Bast and Elmar Haussmann. 2015. More accurate question answering on freebase. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM 2015, Melbourne, VIC, Australia, October 19 - 23, 2015, pages 1431–1440. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415–1425, Baltimore, Maryland, June. Association for Computational Linguistics. 774 Jonathan Berant and Percy Liang. 2015. Imitation learning of agenda-based semantic parsers. Transactions of the Association for Computational Linguistics, 3:545–558. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA, October. Association for Computational Linguistics. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 615–620, Doha, Qatar, October. Association for Computational Linguistics. Qingqing Cai and Alexander Yates. 2013a. Largescale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 423–433, Sofia, Bulgaria, August. Association for Computational Linguistics. Qingqing Cai and Alexander Yates. 2013b. Semantic parsing freebase: Towards open-domain semantic parsing. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 328–338, Atlanta, Georgia, USA, June. Association for Computational Linguistics. Eunsol Choi, Tom Kwiatkowski, and Luke Zettlemoyer. 2015. Scalable semantic parsing with partial ontologies. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1311–1320, Beijing, China, July. Association for Computational Linguistics. 
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 778–788, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Chinese Zero Pronoun Resolution with Deep Neural Networks Chen Chen and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {yzcchen,vince}@hlt.utdallas.edu Abstract While unsupervised anaphoric zero pronoun (AZP) resolvers have recently been shown to rival their supervised counterparts in performance, it is relatively difficult to scale them up to reach the next level of performance due to the large amount of feature engineering efforts involved and their ineffectiveness in exploiting lexical features. To address these weaknesses, we propose a supervised approach to AZP resolution based on deep neural networks, taking advantage of their ability to learn useful task-specific representations and effectively exploit lexical features via word embeddings. Our approach achieves stateof-the-art performance when resolving the Chinese AZPs in the OntoNotes corpus. 1 Introduction A zero pronoun (ZP) is a gap in a sentence that is found when a phonetically null form is used to refer to a real-world entity. An anaphoric zero pronoun (AZP) is a ZP that corefers with one or more preceding mentions in the associated text. Below is an example taken from the Chinese Treebank (CTB), where the ZP (denoted as *pro*) refers to 俄罗斯(Russia). [俄罗斯] 作为米洛舍夫维奇一贯的支持者, *pro* 曾经提出调停这场政治危机。 ([Russia] is a consistent supporter of Milošević, *pro* has proposed to mediate the political crisis.) As we can see, ZPs lack grammatical attributes that are useful for overt pronoun resolution such as number and gender. This makes ZP resolution more challenging than overt pronoun resolution. Automatic ZP resolution is typically composed of two steps. The first step, AZP identification, involves extracting ZPs that are anaphoric. The second step, AZP resolution, aims to identify an antecedent of an AZP. State-of-the-art ZP resolvers have tackled both of these steps in a supervised manner, training one classifier for AZP identification and another for AZP resolution (e.g., Zhao and Ng (2007), Kong and Zhou (2010)). More recently, Chen and Ng (2014b; 2015) have proposed unsupervised probabilistic AZP resolution models (henceforth the CN14 model and the CN15 model, respectively) that rival their supervised counterparts in performance. An appealing aspect of these unsupervised models is that their language-independent generative process enables them to be applied to languages where data annotated with ZP links are not readily available. Though achieving state-of-the-art performance, these models have several weaknesses. First, a lot of manual efforts need to be spent on engineering the features for generative probabilistic models, as these models are sensitive to the choice of features. For instance, having features that are (partially) dependent on each other could harm model performance. Second, in the absence of labeled data, it is difficult, though not impossible, for these models to profitably employ lexical features (e.g., word pairs, syntactic patterns involving words), as determining which lexical features are useful and how to combine the potentially large number of lexical features in an unsupervised manner is a very challenging task. In fact, the unsupervised models proposed by Chen and Ng (2014b; 2015) are unlexicalized, presumably owing to the aforementioned reasons. 
Unfortunately, as shown in previous work (e.g, Zhao and Ng (2007), Chen and Ng (2013)), the use of lexical features contributed significantly to the performance of state-of-the-art supervised AZP resolvers. Finally, owing to the lack of labeled data, the model parameters are learned to maximize data 778 likelihood, which may not correlate well with the desired evaluation measure (i.e., F-score). Hence, while unsupervised resolvers have achieved stateof-the-art performance, these weaknesses together suggest that it is very challenging to scale these models up so that they can achieve the next level of performance. Our goal in this paper is to improve the state of the art in AZP resolution. Motivated by the aforementioned weaknesses, we propose a novel approach to AZP resolution using deep neural networks, which we believe has three key advantages over competing unsupervised counterparts. First, deep neural networks are particularly good at discovering hidden structures from the input data and learning task-specific representations via successive transformations of the input vectors, where different layers of a network correspond to different levels of abstractions that are useful for the target task. For the task of AZP resolution, this is desirable. Traditionally, it is difficult to correctly resolve an AZP if its context is lexically different from its antecedent's context. This is especially the case for unsupervised resolvers. In contrast, a deep network can handle difficult cases like this via learning representations that make lexically different contexts look similar. Second, we train our deep network in a supervised manner.1 In particular, motivated by recent successes of applying the mention-ranking model (Denis and Baldridge, 2008) to entity coreference resolution (e.g., Chang et al. (2013), Durrett and Klein (2013), Clark and Manning (2015), Martschat and Strube (2015), Wiseman et al. (2015)), we propose to employ a ranking-based deep network, which is trained to assign the highest probability to the correct antecedent of an AZP given a set of candidate antecedents. This contrasts with existing supervised AZP resolvers, all of which are classification-based. Optimizing this objective function is better than maximizing data likelihood, as the former is more tightly coupled with the desired evaluation metric (F-score) than the latter. Finally, given that our network is trained in a supervised manner, we can extensively employ lex1Note that deep neural networks do not necessarily have to be trained in a supervised manner. In fact, in early research on extending semantic modeling using auto-encoders (Salakhutdinov and Hinton, 2007), the networks were trained in an unsupervised manner, where the model parameters were optimized for the reconstruction of the input vectors. ical features and use them in combination with other types of features that have been shown to be useful for AZP resolution. However, rather than employing words directly as features, we employ word embeddings trained in an unsupervised manner. The goal of the deep network will then be to take these task-independent word embeddings as input and convert them into embeddings that would work best for AZP resolution via supervised learning. We call our approach an embedding matching approach because the underlying deep network attempts to compare the embedding learned for an AZP with the embedding learned for each of its antecedents. To our knowledge, this is the first approach to AZP resolution based on deep networks. 
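As a rough sketch of the embedding matching step just described (the full architecture and notation follow in Section 3), the final scoring stage amounts to a cosine comparison of learned representations followed by a softmax over the candidate antecedents. The helper names and dimensionalities below are illustrative assumptions, not the system's exact configuration; the smoothing factor of 20 is the value reported later in Table 4.

```python
import numpy as np

def matching_score(zp_repr, cand_repr):
    # Cosine similarity between the learned low-dimensional representations
    # of the AZP and one candidate antecedent.
    return zp_repr.dot(cand_repr) / (
        np.linalg.norm(zp_repr) * np.linalg.norm(cand_repr))

def rank_candidates(zp_repr, cand_reprs, gamma=20.0):
    """Convert matching scores into one probability per candidate antecedent.

    gamma is the softmax smoothing factor; the candidate with the highest
    probability is predicted as the antecedent.
    """
    scores = np.array([matching_score(zp_repr, c) for c in cand_reprs])
    exp_scores = np.exp(gamma * scores)
    return exp_scores / exp_scores.sum()
```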
When evaluated on the Chinese portion of the OntoNotes 5.0 corpus, our embedding matching approach to AZP resolution outperforms the CN15 model, achieving state-of-the-art results. The rest of the paper is organized as follows. Section 2 overviews related work on zero pronoun resolution for Chinese and other languages. Section 3 describes our embedding matching approach, specifically the network architecture and the way we train and apply the network. We present our evaluation results in Section 4 and our conclusions in Section 5. 2 Related Work Chinese ZP resolution. Early approaches to Chinese ZP resolution are rule-based. Converse (2006) applied Hobbs' algorithm (Hobbs, 1978) to resolve the ZPs in the CTB documents. Yeh and Chen (2007) hand-engineered a set of rules for ZP resolution based on Centering Theory (Grosz et al., 1995). In contrast, virtually all recent approaches to this task are learning-based. Zhao and Ng (2007) are the first to employ a supervised learning approach to Chinese ZP resolution. They trained an AZP resolver by employing syntactic and positional features in combination with a decision tree learner. Unlike Zhao and Ng, Kong and Zhou (2010) employed context-sensitive convolution tree kernels (Zhou et al., 2008) in their resolver to model syntactic information. Chen and Ng (2013) extended Zhao and Ng's feature set with novel features that encode the context surrounding a ZP and its candidate antecedents, and exploited the coreference links between ZPs as bridges to 779 Figure 1: The architecture of our embedding matching model. The number in each box indicates the size of the corresponding vector. find textually distant antecedents for ZPs. As mentioned above, there have been attempts to perform unsupervised AZP resolution. For instance, using only data containing manually resolved overt pronouns, Chen and Ng (2014a) trained a supervised overt pronoun resolver and applied it to resolve AZPs. More recently, Chen and Ng (2014b; 2015) have proposed unsupervised probabilistic AZP resolution models that rivaled their supervised counterparts in performance. While we aim to resolve anaphoric ZPs, Rao et al. (2015) resolved deictic non-anaphoric ZPs, which "refer to salient entities in the environment such as the speaker, hearer or pragmatically accessible referent without requiring any introduction in the preceding text''. ZP resolution for other languages. There have been rule-based and supervised machine learning approaches for resolving ZPs in other languages. For example, to resolve ZPs in Spanish texts, Ferrández and Peral (2000) proposed a set of hand-crafted rules that encode preferences for candidate antecedents. In addition, supervised approaches have been extensively employed to resolve ZPs in Korean (e.g., Han (2006)), Japanese (e.g., Seki et al. (2002), Isozaki and Hirao (2003), Iida et al. (2006; 2007), Sasano et al. (2008), Taira et al. (2008), Imamura et al. (2009), Sasano et al. (2009), Watanabe et al. (2010), Hayashibe et al. (2011), Iida and Poesio (2011), Sasano and Kurohashi (2011), Yoshikawa et al. (2011), Hangyo et al. (2013), Yoshino et al. (2013), Iida et al. (2015)), and Italian (e.g., Iida and Poesio (2011)). 3 Model In this section, we first introduce our network architecture (Section 3.1), and then describe how we train it (Section 3.2) and apply it (Section 3.3). 3.1 Network Architecture The network architecture is shown in Figure 1. 
Since we employ a ranking model to rank the candidate antecedents of an AZP z, the inputs to the network are (1) a feature vector representing the AZP, and (2) n feature vectors representing its n candidate antecedents, c1, c2, . . ., cn. As will be explained in detail in Section 3.2.2, the features in each feature vector can be divided into two types: word embedding features and hand-crafted features. Each input feature vector will then be passed through three hidden layers in the network, which will successively map it into a low-dimensional feature space. The resulting vector can be viewed as the low-dimensional semantic embedding of the corresponding input vector. Finally, the model computes a matching score between z and each of its candidate antecedents based on their lowdimensional representations. These scores are then normalized into probabilities using a softmax. More formally, let xe(z) and xh(z) be the vec780 tors of embedding and hand-crafted features representing AZP z respectively, and let xe(ci) and xh(ci) be the vectors of embedding and handcrafted features representing candidate antecedent ci respectively. In addition, let y(z) and y(ci) be the (low-dimensional) output vectors for z and ci respectively, l1, l2, and l3 be the intermediate hidden layers, Wi and W ′ i be the weight matrices associated with z and the ci's in hidden layer i, bi and b′ i be the bias terms associated with z and the ci's.2 We then have: l1(z) = f(W1xe(z) + b1) l2(z) = l1(z) ⊕xh(z) l3(z) = f(W2l2(z) + b2) y(z) = f(W3l3(z) + b3) (1) l1(ci) = f(W ′ 1xe(ci) + b′ 1) l2(ci) = l1(ci) ⊕xh(ci) l3(ci) = f(W ′ 2l2(ci) + b′ 2) y(ci) = f(W ′ 3l3(z) + b′ 3) (2) where f is the activation function at output layer y and hidden layers l1 and l3. In this network, we employ tanh as the activation function. Hence, f(x) = tanh(x) = 1 −e−2x 1 + e−2x (3) The matching score between an AZP z and a candidate antecedent ci is then measured as: R(z, ci) = cos(y(z), y(ci)) = y(z)T y(ci) ||y(z)||||y(ci)|| (4) 3.2 Training 3.2.1 Training Instance Creation We create one training instance from each AZP in each training document. Since our model is ranking-based, each training instance corresponds to an AZP z and all of its candidate antecedents Ci. In principle, we can follow previous work and assume that the set of candidate antecedents C contains all and only those maximal or modifier noun phrases (NPs) that precede z in the associated text and are at most two sentences away from it. However, to improve training efficiency, we select exactly four candidate antecedents for each 2Note that the target AZP and its candidate antecedents use different weight matrices and biases within each layer. This is needed because the features of the AZP and those of the candidate antecedents come from two different feature spaces. AZP z as follows. First, we take the closest correct antecedent z to be one of the four candidate antecedents. Next, we compute a salience score for each of its non-coreferent candidate antecedents and select the three with the highest salience scores as the remaining three candidate antecedents. We compute salience as follows. For each AZP z, we compute the salience score for each (partial) entity preceding z.3 To reduce the size of the list of preceding entities, we only consider a partial entity active if at least one of its mentions appears within two sentences of the active AZP z. We compute the salience score of each active entity w.r.t. 
z using the following equation: ∑ m∈E g(m) ∗decay(m) (5) where m is a mention belonging to active entity E, g(m) is a grammatical score which is set to 4, 2, or 1 depending on whether m's grammatical role is Subject, Object, or Other respectively, and decay(m) is a decay factor that is set to 0.5dis (where dis is the sentence distance between m and z). Finally, we assign the correct label (i.e., the matching score) to each candidate antecedent. The score is 1 for the correct antecedent and 0 otherwise. 3.2.2 Features As we can see from Figure 1, each input feature vector, regardless of whether it is representing an AZP or one of its candidate antecedents, is composed of two types of features, embedding features and hand-crafted features, as described below. Embedding features. To encode the lexical contexts of the AZP and its candidate antecedents, one could employ one-hot vectors. However, the resulting lexical features may suffer from sparsity. To see the reason, assuming that the vocabulary size is V and the number of neurons in the first hidden layer l1 is L1, the size of the weight matrices W1 and W ′ 1 is V ∗L1, which in our dataset is around two million while the number of training examples is much smaller. Therefore, instead of using one-hot vectors, we employ embedding features. Specifically, we employ the pre-trained word embeddings (of size 100) 3We compute the list of preceding entities automatically using SinoCoreferencer (Chen and Ng, 2014c), a Chinese entity coreference resolver downloadable from http://www. hlt.utdallas.edu/~yzcchen/coreference/. 781 Syntactic features (13) whether z is the first gap in an IP clause; whether z is the first gap in a subject-less IP clause, and if so, POS(w1); whether POS(w1) is NT; whether w1 is a verb that appears in a NP or VP; whether Pl is a NP node; whether Pr is a VP node; the phrasal label of the parent of the node containing POS(w1); whether V has a NP, VP or CP ancestor; whether C is a VP node; whether there is a VP node whose parent is an IP node in the path from w1 to C. Other features (6) whether z is the first gap in a sentence; whether z is in the headline of the text; the type of the clause in which z appears; the grammatical role of z (Subject, Object, or Other); whether w−1 is a punctuation; whether w−1 is a comma. Table 1: Hand-crafted features associated with an AZP. z is a zero pronoun. V is the VP node following z. wi is the ith word to the right of z (if i is positive) or the ith word to the left of z (if i is negative). C is lowest common ancestor of w−1 and w1. Pl and Pr are the child nodes of C that are the ancestors of w−1 and w1 respectively. Syntactic features (12) whether c has an ancestor NP, and if so, whether this NP is a descendent of c's lowest ancestor IP; whether c has an ancestor VP, and if so, whether this VP is a descendent of c's lowest ancestor IP; whether c has an ancestor CP; the grammatical role of c (Subject, Object, or Other); the clause type in which c appears; whether c is an adverbial NP, a temporal NP, a pronoun or a named entity. Distance features (4) the sentence distance between c and z; the segment distance between c and z, where segments are separated by punctuations; whether c is the closest NP to z; whether c and z are siblings in the associated parse tree. Other features (2) whether c is in the headline of the text; whether c is a subject whose governing verb is lexically identical to the verb governing of z. Table 2: Hand-crafted features associated with a candidate antecedent. 
z is a zero pronoun. c is a candidate antecedent of z. V is the VP node following z in the parse tree. obtained by training word2vec4 on the Chinese portion of the training data from the OntoNotes 5.0 corpus. For an AZP z, we first find the word preceding it and its governing verb, and then concatenate the embeddings of these two words to form the AZP's embedding features. (If z happens to begin a sentence, we use a special embedding to represent the word preceding it.) For a candidate antecedent, we employ the word embedding of its head word as its embedding features. Hand-crafted features. The hand-crafted features are (low-dimensional) features that capture the syntactic, positional and other relationships between an AZP and its candidate antecedents. These features are similar to the ones employed in previous work on AZP resolution (e.g., Zhao and Ng (2007), Kong and Zhou (2010), Chen and Ng (2013)). We split these hand-crafted features into two disjoint sets: those associated with an AZP and those associated with a candidate antecedent. If a feature is computed based on the AZP, then we regard it as a feature associated with the AZP; otherwise, we put it in the other feature set. A brief description of the hand-crafted features associated with an AZP and those associated with a candidate antecedent are shown in Table 1 and Table 2 respectively. Note that we convert each multi-valued feature into a corresponding set of binary-valued features (i.e., if a feature has N different values, 4https://code.google.com/p/word2vec/ we will create N binary indicators to represent it). To ensure that the number of hand-crafted features representing an AZP is equal to the number of hand-crafted features representing a candidate antecedent5, we append to the end of a feature vector as many dummy zeroes as needed.6 3.2.3 Parameter Estimation We employ online learning to train the network, with one training example in a mini-batch. In other words, we update the weights after processing each training example based on the correct matching scores of the training example (which is 1 for the correct antecedent and 0 otherwise) and the network's predicted matching scores. To compute the predicted matching score between AZP z and one of its candidate antecedents ci, we apply the following softmax function: P(ci|z, Λ) = exp(γR(z, ci)) ∑ c′∈C exp(γR(z, c′)) (6) where (1) γ is a smoothing factor that is empirically set on a held-out data set, (2) R(z, ci) is the cosine similarity between vector y(z) and vector y(ci) (see Section 3.1), (3) C denotes the set of candidate antecedents of z, and (4) Λ denotes the set of parameters of our neural network: 5As seen in Figure 1, we set the length of the vector to 50. 6Appending dummy 0s is solely for the convenience of the network implementation: doing so does not have any effect on any computation. 782 Λ = {W1, W2, W3, b1, b2, b3, W ′ 1, W ′ 2, W ′ 3, b′ 1, b′ 2, b′ 3} (7) To maximize the matching score of the correct antecedent, we estimate the model parameters to minimize the following loss function: Jz(Λ) = − ∑ ci∈C δ(z, ci)P(ci|z, Λ) (8) where δ(z, ci) is an indicator function indicating whether AZP z and candidate antecedent ci are coreferent: δ(z, ci) = {1, if z and ci are coreferent 0, otherwise (9) Since Jz(Λ) is differentiable w.r.t. to Λ, we train the model using stochastic gradient descent. 
Specifically, the model parameters Λ are updated according to the following update rule: Λt = Λt−1 −α∂J (Λt−1) ∂Λt−1 (10) where α is the learning rate, and Λt and Λt−1 are model parameters at the tth iteration and the (t −1)th iteration respectively. To avoid overfitting, we determine the hyperparameters of the network using a held-out development set. 3.3 Inference After training, we can apply the resulting network to find an antecedent for each AZP. Each test instance corresponds to an AZP z and four of its candidate antecedents. Specifically, the four candidate antecedents with the highest salience scores will be chosen. Importantly, unlike in training, where we guarantee that the correct antecedents is among the set of candidate antecedents, in testing, we don't. We use the network to rank the candidate antecedents by computing the posterior probability of each of them being a correct antecedent of z, and select the one with the highest probability to be its antecedent. The aforementioned resolution procedure can be improved, however. The improvement is motivated by a problem we observed previously (Chen and Ng, 2013): an AZP and its closest antecedent can sometimes be far away from each other, thus making it difficult to correctly resolve the AZP. To address this problem, we employ the following resolution procedure in our experiments. Given a test document, we process its AZPs in a left-to-right Training Test Documents 1,391 172 Sentences 36,487 6,083 Words 756,063 110,034 AZPs 12,111 1,713 Table 3: Statistics on the training and test sets. manner. As soon as we resolve an AZP to a preceding NP c, we fill the corresponding AZP's gap with c. Hence, when we process an AZP z, all of its preceding AZPs in the associated text have been resolved, with their gaps filled by the NPs they are resolved to. To resolve z, we create test instances between z and its four most salient candidate antecedents in the same way as described before. The only difference is that the set of candidate antecedents of z may now include those NPs that are used to fill the gaps of the AZPs resolved so far. Some of these additional candidate antecedents are closer to z than the original candidate antecedents, thereby facilitating the resolution of z. If the model resolves z to the additional candidate antecedent that fills the gap left behind by, say, AZP z′, we postprocess the output by resolving z to the NP that z′ is resolved to.7 4 Evaluation 4.1 Experimental Setup Datasets. We employ the Chinese portion of the OntoNotes 5.0 corpus that was used in the official CoNLL-2012 shared task (Pradhan et al., 2012). In the CoNLL-2012 data, the training set and the development set contain ZP coreference annotations, but the test set does not. Therefore, we train our models on the training set and perform evaluation on the development set. Statistics on the datasets are shown in Table 3. The documents in these datasets come from six sources, namely Broadcast News (BN), Newswire (NW), Broadcast Conversation (BC), Telephone Conversation (TC), Web Blog (WB) and Magazine (MZ). Evaluation measures. Following previous work on AZP resolution (e.g., Zhao and Ng (2007), Chen and Ng (2013)), we express the results of AZP resolution in terms of recall (R), precision (P) and F-score (F). We report the scores for each source in addition to the overall score. 7This postprocessing step is needed because the additional candidate antecedents are only gap fillers. 
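A minimal sketch of the left-to-right resolution procedure with gap filling described in Section 3.3 is given below; `score_candidates` and `most_salient_candidates` stand in for the trained network and the salience-based candidate selection described earlier, and the object interface is an assumption for illustration only.

```python
def resolve_document(azps, score_candidates, most_salient_candidates):
    """Resolve AZPs in textual order, filling each resolved gap with its NP.

    azps                    -- the document's AZPs, left to right
    score_candidates        -- assumed: (azp, candidates) -> posterior
                               probabilities (the trained ranking network)
    most_salient_candidates -- assumed: returns the four most salient candidate
                               NPs preceding the AZP, which may now include NPs
                               filling the gaps of earlier AZPs
    """
    antecedents = {}
    for zp in azps:
        candidates = most_salient_candidates(zp)
        probs = score_candidates(zp, candidates)
        best_idx = max(range(len(candidates)), key=lambda i: probs[i])
        chosen = candidates[best_idx]
        antecedents[zp] = chosen
        zp.fill_gap(chosen)  # later AZPs can now resolve to this nearby copy
    return antecedents
```

Because each gap is filled with the NP that the earlier AZP was resolved to, resolving a later AZP to such a filler links it back to that same NP, which is the postprocessing step noted in footnote 7.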
783 Number of embedding features for a word 100 Number of hand-crafted features 50 Number of neurons in l1 100 Number of neurons in l3 75 Number of neurons in y 50 Number of epochs over the training data 100 Smoothing factor γ 20 Learning rate α 0.01 Table 4: Hyperparameter values. Hyperparameter tuning. We reserve 20% of the training set for tuning hyperparameters. The tuned hyperparameter values are shown in Table 4. Evaluation settings. Following Chen and Ng (2013), we evaluate our model in three settings. In Setting 1, we assume the availability of gold syntactic parse trees and gold AZPs. In Setting 2, we employ gold syntactic parse trees and system (i.e., automatically identified) AZPs. Finally, in Setting 3, we employ system syntactic parse trees and system AZPs. The gold and system syntactic parse trees, as well as the gold AZPs, are obtained from the CoNLL-2012 shared task dataset, while the system AZPs are identified by a learning-based AZP identifier described in the Appendix. Baseline system. As our baseline, we employ Chen and Ng's (2015) system, which has achieved the best result on our test set. 4.2 Results and Discussion Results of the baseline system and our model on entire test set are shown in row 1 of Table 5. The three major columns in the table show the results obtained in the three settings. As we can see, our model outperforms the baseline significantly by 2.0%, 1.8%, and 1.1% in F-score under Settings 1, 2, and 3, respectively.8 Rows 2−7 of Table 5 show the resolution results on each of the six sources. As we can see, in Setting 1, our model beats the baseline on all six sources in F-score: by 2.4% (NW), 2.5% (MZ), 4.5% (WB), 1.6% (BN), 1.4% (BC), and 0.4% (TC). All the improvements are significant except for TC. These results suggest that our approach works well across different sources. In Setting 2, our model outperforms the baseline on all sources except NW and BC, where the F-scores drop insignificantly by 0.1% for both sources. Finally, in Setting 3, our model outperforms the baseline on all sources except NW and TC, where F-scores 8All significance tests are paired t-tests, with p < 0.05. drop significantly by 0.7% for NW and 1.1% for TC. Given the challenges in applying supervised learning (in particular, the difficulty and time involved in training the deep neural network as well as the time and effort involved in manually annotating the data needed to train the network), one may wonder whether the small though statistically significant improvements in these results provide sufficient justification for going back to supervised learning from the previous state-of-the-art unsupervised model. We believe that this is the beginning, not the end, of applying deep neural networks for AZP resolution. In particular, there is a lot of room for improvements, which may involve incorporating more sophisticated features and improving the design of the network (e.g., the dimensionality of the intermediate representations, the number of hidden layers, the objective function), for instance. 4.3 Ablation Results Recall that the input of our model is composed of two groups of features, embedding features and hand-crafted features. To investigate the contribution of each of these two feature groups, we conduct ablation experiments. Specifically, in each ablation experiment, we retrain the network using only one group of features. Ablation results under the three settings are shown in Table 6. 
In Setting 1, when the handcrafted features are ablated, F-score drops significantly by 12.2%. We attribute the drop to the fact that the syntactic, positional, and other relationships encoded in the hand-crafted features play an important role in resolving AZPs. When the embedding features are ablated, F-score drops significantly by 3.7%. This result suggest the effectiveness of the embedding features. Similar trends can be observed w.r.t. the other two settings: in Setting 2, F-score drops significantly by 6.8% and 2.2% when the hand-crafted features and the embedding features are ablated respectively, while in Setting 3, F-score drops significantly by 4.6% and 1.1% when the hand-crafted features and the embedding features are ablated. 4.4 Learning Curve We show in Figure 2 the learning curve of the our model obtained under Setting 1. As we can see, after the first epoch, the F-score on the entire test set is around 46%, and it gradually increases to 784 Setting 1: Setting 2: Setting 3: Gold Parses, Gold AZPs Gold Parses, System AZPs System Parses, System AZPs Baseline Our Model Baseline Our Model Baseline Our Model Source R P F R P F R P F R P F R P F R P F Overall 50.0 50.4 50.2 51.8 52.5 52.2 35.7 26.2 30.3 39.6 27.0 32.1 19.6 15.5 17.3 21.9 15.8 18.4 NW 46.4 46.4 46.4 48.8 48.8 48.8 32.1 28.1 30.0 34.5 26.4 29.9 11.9 14.3 13.0 11.9 12.8 12.3 MZ 38.9 39.1 39.0 41.4 41.6 41.5 29.6 19.6 23.6 34.0 22.4 27.0 4.9 4.7 4.8 9.3 7.3 8.2 WB 51.8 51.8 51.8 56.3 56.3 56.3 39.1 22.9 28.9 44.7 25.1 32.2 20.1 14.3 16.7 23.9 16.1 19.2 BN 53.8 53.8 53.8 55.4 55.4 55.4 30.8 30.7 30.7 36.9 31.9 34.2 18.2 22.3 20.0 22.1 23.2 22.6 BC 49.2 49.6 49.4 50.4 51.3 50.8 35.9 26.6 30.6 37.6 25.6 30.5 19.4 14.6 16.7 21.2 14.6 17.3 TC 51.9 53.5 52.7 51.9 54.2 53.1 43.5 28.7 34.6 46.3 29.0 35.6 31.8 17.0 22.2 31.4 15.9 21.1 Table 5: AZP resolution results of the baseline and our model on the test set. Setting 1: Setting 2: Setting 3: Gold Parses Gold Parses System Parses Gold AZPs System AZPs System AZPs System R P F R P F R P F Full system 51.8 52.5 52.2 39.6 27.0 32.1 21.9 15.8 18.4 Embedding features only 39.2 40.8 40.0 30.9 21.5 25.3 16.3 12.0 13.8 Hand-crafted features only 48.2 48.7 48.5 37.0 25.1 29.9 20.6 14.9 17.3 Table 6: Ablation results of AZP resolution on the whole test set. Figure 2: The learning curve of our model on the entire test set under Setting 1. 52% in the 80th epoch when performance starts to plateau. These results provide suggestive evidence for our earlier hypothesis that our objective function (Equation (8)) is tightly coupled with the desired evaluation metric (F-score). 4.5 Analysis of Results To gain additional insights into our approach, we examine the outputs of our model obtained under Setting 1. We first analyze the cases where the AZP was correctly resolved by our model but incorrectly resolved by the baseline. Consider the following representative example with the corresponding English translation. [陈水扁] 在登机前发表简短谈话时表示,[台 湾] 要站起来走出去。... ∗pro∗也希望此行能 把国际友谊带回来。 [Chen Shui-bian] delivered a short speech before boarding, saying that [Taiwan] should stand up and go out. ... ∗pro∗also hopes that this trip can bring back international friendship. In this example, the correct antecedent of the AZP is 陈水扁(Chen Shui-bian). However, the baseline incorrectly resolves it to 台湾(Taiwan). The baseline's mistake can be attributed to the facts that (1) 台湾is the most salient candidate antecedent in the discourse, and (2) 台湾is closer to the AZP than the correct antecedent 陈水扁. 
Nevertheless, our model still correctly identifies 陈水 扁as the AZP's antecedent because of the embedding features. A closer inspection of the training data reveals that although the word 陈水扁never appeared as the antecedent of an AZP whose governing verb is 希望(hope) in the training data, many AZPs that are governed by 希望are coreferent with other person names. Because the word 陈 水扁has a similar word embedding as those person names, our approach successfully generalizes such lexical context and makes the right resolution decision. Next, we examine the errors made by our model and find that the majority of the mistakes result from insufficient lexical contexts. Currently, to encode the lexical contexts, we only consider the word preceding the AZP and its governing verb, as well as the head word of the candidate antecedent. However, this encoding ignored a lot of potentially useful context information, such as the clause following the AZP, the modifier of the candidate antecedent and the clause containing the candidate antecedent. Consider the following example: 785 [我] 前一会精神上太紧张。...∗pro∗现在比较 平静了。 [I] was too nervous a while ago. ... ∗pro∗am now calmer. To resolve the AZP to its correct antecedent 我 (I), one needs to compare the two clauses containing the AZP and 我. However, since our model does not encode a candidate antecedent's context, it does not resolve the AZP correctly. One way to address this problem would be to employ sentence embeddings to represent the clauses containing the AZP and its candidate antecedents, and then perform sentence embedding matching to resolve the AZP. The primary challenge concerns how to train the model to match two clauses with probably no overlapping words and with a limited number of training examples. 5 Conclusions We proposed an embedding matching approach to zero pronoun resolution based on deep networks. To our knowledge, this is the first neural networkbased approach to zero pronoun resolution. When evaluated on the Chinese portion of the OntoNotes corpus, our approach achieved state-of-the-art results. Acknowledgments We thank the three anonymous reviewers for their detailed comments. This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF. References Kai-Wei Chang, Rajhans Samdani, and Dan Roth. 2013. A constrained latent variable model for coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 601--612. Chen Chen and Vincent Ng. 2013. Chinese zero pronoun resolution: Some recent advances. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1360--1365. Chen Chen and Vincent Ng. 2014a. Chinese zero pronoun resolution: An unsupervised approach combining ranking and integer linear programming. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, pages 1622--1628. Chen Chen and Vincent Ng. 2014b. Chinese zero pronoun resolution: An unsupervised probabilistic model rivaling supervised resolvers. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 763--774. Chen Chen and Vincent Ng. 2014c. SinoCoreferencer: An end-to-end Chinese event coreference resolver. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, pages 4532--4538. 
Chen Chen and Vincent Ng. 2015. Chinese zero pronoun resolution: A joint unsupervised discourseaware model rivaling state-of-the-art resolvers. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 320-326. Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1405--1415. Susan Converse. 2006. Pronominal Anaphora Resolution in Chinese. Ph.D. thesis, University of Pennsylvania. Pascal Denis and Jason Baldridge. 2008. Specialized models and ranking for coreference resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 660--669. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971--1982. Antonio Ferrández and Jesús Peral. 2000. A computational approach to zero-pronouns in Spanish. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 166--172. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203--226. Na-Rae Han. 2006. Korean zero pronouns: Analysis and resolution. Ph.D. thesis, University of Pennsylvania. Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2013. Japanese zero reference resolution considering exophora and author/reader mentions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 924--934. 786 Yuta Hayashibe, Mamoru Komachi, and Yuji Matsumoto. 2011. Japanese predicate argument structure analysis exploiting argument position and type. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 201--209. Jerry Hobbs. 1978. Resolving pronoun references. Lingua, 44:311--338. Ryu Iida and Massimo Poesio. 2011. A cross-lingual ILP solution to zero anaphora resolution. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 804--813. Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2006. Exploting syntactic patterns as clues in zero-anaphora resolution. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 625--632. Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2007. Zero-anaphora resolution by learning rich syntactic pattern features. ACM Transactions on Asian Language Information Processing, 6(4). Ryu Iida, Kentaro Torisawa, Chikara Hashimoto, JongHoon Oh, and Julien Kloetzer. 2015. Intrasentential zero anaphora resolution using subject sharing recognition. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2179--2189. Kenji Imamura, Kuniko Saito, and Tomoko Izumi. 2009. Discriminative approach to predicateargument structure analysis with zero-anaphora resolution. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 85--88. Hideki Isozaki and Tsutomu Hirao. 2003. 
Japanese zero pronoun resolution based on ranking rules and machine learning. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 184--191. Thorsten Joachims. 1999. Making large-scale SVM learning practical. In Bernhard Scholkopf and Alexander Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 44--56. MIT Press. Fang Kong and GuoDong Zhou. 2010. A tree kernelbased unified framework for Chinese zero anaphora resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 882--891. Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. In Transactions of the Association for Computational Linguistics, 3:405--418. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning: Shared Task, pages 1--40. Sudha Rao, Allyson Ettinger, Hal Daumé III, and Philip Resnik. 2015. Dialogue focus tracking for zero pronoun resolution. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 494--503. Ruslan Salakhutdinov and Geoffrey Hinton. 2007. Semantic hashing. In Proceedings of the SIGIR Workshop on Information Retrieval and Applications of Graphical Models. Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to Japanese zero anaphora resolution with large-scale lexicalized case frames. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 758-766. Ryohei Sasano, Daisuke Kawahara, and Sadao Kurohashi. 2008. A fully-lexicalized probabilistic model for Japanese zero anaphora resolution. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 769--776. Ryohei Sasano, Daisuke Kawahara, and Sadao Kurohashi. 2009. The effect of corpus size on case frame acquisition for discourse analysis. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 521-529. Kazuhiro Seki, Atsushi Fujii, and Tetsuya Ishikawa. 2002. A probabilistic method for analyzing Japanese anaphora integrating zero pronoun detection and resolution. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1. Hirotoshi Taira, Sanae Fujita, and Masaaki Nagata. 2008. A Japanese predicate argument structure analysis using decision lists. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 523--532. Yotaro Watanabe, Masayuki Asahara, and Yuji Matsumoto. 2010. A structured model for joint learning of argument roles and predicate senses. In Proceedings of the ACL 2010 Conference Short Papers, pages 98--102. Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1416--1426. 787 Yaqin Yang and Nianwen Xue. 2010. Chasing the ghost: recovering empty categories in the Chinese Treebank. 
In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1382--1390. Ching-Long Yeh and Yi-Chun Chen. 2007. Zero anaphora resolution in Chinese with shallow parsing. Journal of Chinese Language and Computing, 17(1):41--56. Katsumasa Yoshikawa, Masayuki Asahara, and Yuji Matsumoto. 2011. Jointly extracting Japanese predicate-argument relation with Markov Logic. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1125--1133. Koichiro Yoshino, Shinsuke Mori, and Tatsuya Kawahara. 2013. Predicate argument structure analysis using partially annotated corpora. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 957--961. Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of Chinese zero pronouns: A machine learning approach. In Proceedings of the 2007 Joint Conference on Empirical Methods on Natural Language Processing and Computational Natural Language Learning, pages 541--550. GuoDong Zhou, Fang Kong, and Qiaoming Zhu. 2008. Context-sensitive convolution tree kernel for pronoun resolution. In Proceedings of the 3rd International Joint Conference on Natural Language Processing, pages 25--31. Appendix: Anaphoric Zero Pronoun Identification Recall that Settings 2 and 3 in our evaluation involve the use of system AZPs. Our supervised AZP identification procedure is composed of two steps. First, in the extraction step, we heuristically extract ZPs. Then, in the classification step, we train a classifier to determine which of the ZPs extracted in the first step are AZPs. To implement the extraction step, we use Zhao and Ng's (2007) observation: ZPs can only occur before a VP node in a syntactic parse tree. However, according to Kong and Zhou (2010), ZPs do not need to be extracted from every VP: if a VP node occurs in a coordinate structure or is modified by an adverbial node, then only its parent VP node needs to be considered. We extract ZPs from all VPs that satisfy the above constraints. To implement the classification step, we train a binary classifier using SVMlight (Joachims, 1999) on the CoNLL-2012 training set to distinguish AZPs from non-AZPs. Each instance corresponds to a ZP extracted in the first step and is represented Syntactic features (13) whether z is the first gap in an IP clause; whether z is the first gap in a subject-less IP clause, and if so, POS(w1); whether POS(w1) is NT; whether t1 is a verb that appears in a NP or VP; whether Pl is a NP, QP, IP or ICP node; whether Pr is a VP node; the phrasal label of the parent of the node containing POS(t1); whether V has a NP, VP, QP or CP ancestor; whether C is a VP node; whether the parent of V is an IP node; whether V's lowest IP ancestor has (1) a VP node as its parent and (2) a VV node as its left sibling; whether there is a VP node whose parent is an IP node in the path from t1 to C. Lexical features (13) the words surrounding z and/or their POS tags, including w1, w−1, POS(w1), POS(w−1) + POS(w1), POS(w1) + POS(w2), POS(w−2) + POS(w−1), POS(w1) + POS(w2) + POS(w3), POS(w−1) + w1, and w−1 + POS(w1); whether w1 is a transitive verb, an intransitive verb or a preposition; whether w−1 is a transitive verb without an object. Other features (6) whether z is the first gap in a sentence; whether z is in the headline of the text; the type of the clause in which z appears; the grammatical role of z; whether w−1 is a punctuation; whether w−1 is a comma. Table 7: Features for AZP identification. 
z is a zero pronoun. V is the VP node following z. wi is the ith word to the right of z (if i is positive) or the ith word to the left of z (if i is negative). C is the lowest common ancestor of w−1 and w1. Pl and Pr are the child nodes of C that are the ancestors of w−1 and w1 respectively. Each ZP instance is represented by 32 features, 13 of which were proposed by Zhao and Ng (2007) and 19 of which were proposed by Yang and Xue (2010). A brief description of these features can be found in Table 7. When gold parse trees are employed, the recall, precision and F-score of the AZP identifier on our test set are 75.1%, 50.1% and 60.1% respectively. Using automatic parse trees, the performance of the AZP identifier drops to 43.7% (R), 30.7% (P) and 36.1% (F).
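A minimal sketch of the extraction step is given below, assuming bracketed constituency parses readable by NLTK; the coordinate-structure and adverbial-modifier tests are an illustrative approximation of the constraints described above, not the exact released heuristics.

```python
from nltk.tree import ParentedTree

def candidate_zp_positions(bracketed_parse):
    """Return tree positions of VP nodes whose left edge is a candidate ZP gap.

    ZPs are assumed to occur only immediately before a VP node; a VP inside a
    coordinate VP structure or preceded by an adverbial sibling is skipped so
    that only its parent VP contributes a candidate gap.
    """
    tree = ParentedTree.fromstring(bracketed_parse)
    positions = []
    for vp in tree.subtrees(lambda t: t.label() == "VP"):
        parent, left = vp.parent(), vp.left_sibling()
        if parent is not None and parent.label() == "VP":
            continue  # part of a coordinate/stacked VP: defer to the parent VP
        if left is not None and left.label() == "ADVP":
            continue  # adverbially modified: defer to the parent VP
        positions.append(vp.treeposition())
    return positions
```

Each candidate gap extracted this way would then be encoded with the 32 features of Table 7 and passed to the SVM classifier to decide whether it is an AZP.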
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 789–799, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Constrained Multi-Task Learning for Automated Essay Scoring Ronan Cummins ALTA Institute Computer Lab University of Cambridge Meng Zhang ALTA Institute Computer Lab University of Cambridge {rc635,mz342,ejb}@cl.cam.ac.uk Ted Briscoe ALTA Institute Computer Lab University of Cambridge Abstract Supervised machine learning models for automated essay scoring (AES) usually require substantial task-specific training data in order to make accurate predictions for a particular writing task. This limitation hinders their utility, and consequently their deployment in real-world settings. In this paper, we overcome this shortcoming using a constrained multi-task pairwisepreference learning approach that enables the data from multiple tasks to be combined effectively. Furthermore, contrary to some recent research, we show that high performance AES systems can be built with little or no task-specific training data. We perform a detailed study of our approach on a publicly available dataset in scenarios where we have varying amounts of task-specific training data and in scenarios where the number of tasks increases. 1 Introduction Automated essay scoring (AES) involves the prediction of a score (or scores) relating to the quality of an extended piece of written text (Page, 1966). With the burden involved in manually grading student texts and the increase in the number of ESL (English as a second language) learners worldwide, research into AES is increasingly seen as playing a viable role in assessment. Automating the assessment process is not only useful for educators but also for learners, as it can provide instant feedback and encourage iterative refinement of their writing. The AES task has usually been addressed using machine learning. Given a set of texts and associated gold scores, machine learning approaches aim to build models that can generalise to unseen instances. Regression (Page, 1994; Persing and Ng, 2014; Phandi et al., 2015), classification (Larkey, 1998; Rudner and Liang, 2002), and preferenceranking1 approaches (Yannakoudakis et al., 2011) have all been applied to the task. In general, machine learning models only perform well when the training and test instances are from similar distributions. However, it is usually the case that essays are written in response to prompts which are carefully designed to elicit answers according to a number of dimensions (e.g. register, topic, and genre). For example, Table 1 shows extracts from two prompts from a publicly available dataset2 that aim to elicit different genres of persuasive/argumentative responses on different topics. Most previous work on AES has either ignored the differences between essays written in response to different prompts (Yannakoudakis et al., 2011) with the aim of building general AES systems, or has built prompt-specific models for each prompt independently (Chen and He, 2013; Persing and Ng, 2014). One of the problems hindering the wide-scale adoption and deployment of AES systems is the dependence on prompt-specific training data, i.e. substantial model retraining is often needed when a new prompt is released. Therefore, systems that can adapt to new writing tasks (i.e. prompts) with relatively few new task-specific training examples are particularly appealing. 
For example, a system that is trained using only responses from prompt #1 in Table 1 may not generalise well to essays written in response to prompt #2, and vice versa. Even more complications arise when the scoring scale, marking criteria, and/or grade level (i.e. educational stage) vary from task 1also known as pairwise learning-to-rank 2available at https://www.kaggle.com/c/ asap-aes 789 #1 Some experts are concerned that people are spending too much time on their computers and less time exercising, enjoying nature, and interacting with family and friends. Write a letter to your local newspaper in which you state your opinion on the effects computers have on people. #2 Do you believe that certain materials, such as books, music, movies, magazines, etc., should be removed from the shelves if they are found offensive? Support your position with convincing arguments from your own experience, observations, and/or reading. Table 1: Two sample writing tasks from the ASAP (Automated Student Assessment Prize) dataset. to task. If essays written in response to different tasks are marked on different scoring scales, then the actual scores assigned to essays across tasks are not directly comparable. This effect becomes even more pronounced when prompts are aimed at students in different educational stages. In this paper, we address this problem of prompt adaptation using multi-task learning. In particular, we treat each prompt as a different task and introduce a constrained preference-ranking approach that can learn from multiple tasks even when the scoring scale and marking criteria are different across tasks. Our constrained preference-ranking approach significantly increases performance over a strong baseline system when there is limited prompt-specific training data available. Furthermore, we perform a detailed study using varying amounts of task-specific training data and varying numbers of tasks. First, we review some related work. 2 Related Work A number of commercially available systems for AES, have been developed using machine learning techniques. These include PEG (Project Essay Grade) (Page, 2003), e-Rater (Attali and Burstein, 2006), and Intelligent Essay Assessor (IEA) (Landauer et al., 1998). Beyond commercial systems, there has been much research into varying aspects involved in automated assessment, including coherence (Higgins et al., 2004; Yannakoudakis and Briscoe, 2012), prompt-relevance (Persing and Ng, 2014; Higgins et al., 2006), argumentation (Labeke et al., 2013; Somasundaran et al., 2014; Persing and Ng, 2015), grammatical error detection and correction (Rozovskaya and Roth, 2011; Felice et al., 2014), and the development of publicly available resources (Yannakoudakis et al., 2011; Dahlmeier et al., 2013; Persing and Ng, 2014; Ng et al., 2014). While most of the early commercially available systems use linear-regression models to map essay features to a score, a number of more sophisticated approaches have been developed. Preferenceranking (or pairwise learning-to-rank) has been shown to outperform regression for the AES problem (Yannakoudakis et al., 2011). However, they did not study prompt-specific models, as their models used training data originating from different prompts. We also adopt a preferenceranking approach but explicitly model prompt effects during learning. Algorithms that aim to directly maximise an evaluation metric have also been attempted. 
A listwise learning-to-rank approach (Chen and He, 2013) that directly optimises quadratic-weighted Kappa, a commonly used evaluation measure in AES, has also shown promising results. Using training data from natural language tasks to boost performance of related tasks, for which there is limited training data, has received much attention of late (Collobert and Weston, 2008; Duh et al., 2010; Cheng et al., 2015). However, there have been relatively few attempts to apply transfer learning to automated assessment tasks. Notwithstanding, Napoles and Callison-Burch (2015) use a multi-task approach to model differences in assessors, while Heilman and Madnani (2013) specifically focus on domain-adaptation for short answer scoring over common scales. Most relevant is the work of Phandi et al. (2015), who applied domain-adaptation to the AES task using EasyAdapt (EA) (Daume III, 2007). They showed that supplementing a Bayesian linear ridge regression model (BLRR) with data from one other source domain is beneficial when there is limited target domain data. However, it was shown that simply using the source domain data as extra training data outperformed the EA domain adaptation approach in three out of four cases. One major limitation to their approach was that in many instances the source domain and target domain pairs were from different grade levels. This means that any attempt to resolve scores to a common scale is undermined by the fact that the gold scores are not comparable across domains, as the essays were written by students of different educational levels. A further limitation is that multi-domain adapta790 tion (whereby one has access to multiple source domains) was not considered. The main difference between our work and previous work is that our model incorporates multiple source tasks and introduces a learning mechanism that enables us to combine these tasks even when the scores across tasks are not directly comparable. This has not been achieved before. This is non-trivial as it is difficult to see how this can be accomplished using a standard linear-regression approach. Furthermore, we perform the first comprehensive study of multi-task learning for AES using different training set sizes for a number of different learning scenarios. 3 Preference Ranking Model In this section, we describe our baseline AES model which is somewhat similar to that developed by Yannakoudakis et al. (2011). 3.1 Perceptron Ranking (TAPrank) We use a preference-ranking model based on a binary margin-based linear classifier (the Timed Aggregate Perceptron or TAP) (Briscoe et al., 2010). In its simplest form this Perceptron uses batch learning to learn a decision boundary for classifying an input vector xi as belonging to one of two categories. A timing-variable τ (set to 1.0 by default) controls both the learning rate and the number of epochs during training. A preferenceranking model is then built by learning to classify pairwise difference vectors, i.e. learning a weight vector w such that w(xi −xj) > δ, when essay i has a higher gold score than essay j, where δ is the one-sided margin3 (Joachims, 2002; Chapelle and Keerthi, 2010). Therefore, instead of directly learning to predict the gold score of an essay vector, the model learns a weight vector w that minimizes the misclassification of difference vectors. 
Given that the number of pairwise difference vectors in a moderately sized dataset can be extremely large, the training set is reduced by randomly sampling difference vectors according to a user-defined probability (Briscoe et al., 2010). In all experiments in our paper we choose this probability such that 5n difference vectors are sampled, where n is the number of training instances (essays) used. We did not tune any of the hyperparameters of the model.

3.2 From Rankings to Predicted Scores

As the weight vector w is optimized for pairwise ranking, a further step is needed to use the ranking model for predicting a score. In particular, for each of the n vectors in our training set, a real-scalar value is assigned according to the dot-product of the weight vector and the training instance (i.e. w · xi), essentially giving its distance (or margin) from the zero vector. Then, using the training data, we train a one-dimensional linear regression model β + ϵ to map these assignments to the gold score of each instance. Finally, to make a prediction ˆy for a test vector, we first calculate its distance from the zero vector using w · xi and map it to the scoring scale using the linear regression model ˆy = β(w · xi) + ϵ. For brevity we denote this entire approach (a ranking and a linear regression step) to predicting the final score as TAP.

3.3 Features

The set of features used for our ranking model is similar to those identified in previous work (Yannakoudakis et al., 2011; Phandi et al., 2015) and is as follows:
1. word unigrams, bigrams, and trigrams
2. POS (part-of-speech) counts
3. essay length (as the number of unique words)
4. GRs (grammatical relations)
5. max-word length and min-sentence length
6. the presence of cohesive devices
7. an estimated error rate

Each essay is processed by the RASP system (Briscoe et al., 2006) with the standard tokenisation and sentence boundary detection modules. All n-grams are extracted from the tokenised sentences. The grammatical relations (GRs) are extracted from the top parse of each sentence in the essay. The presence of cohesive devices is used as a feature. In particular, we use four categories (i.e. addition, comparison, contrast and conclusion) which are hypothesised to measure the cohesion of a text. The error rate is estimated based on a language model using ukWaC (Ferraresi et al., 2008), which contains more than 2 billion English tokens. A trigram in an essay will be treated as an error if it is not found in the language model. Spelling errors are detected using a dictionary lookup, while a rule-based error module (Andersen et al., 2013) with rules generated from the Cambridge Learner Corpus (CLC) (Nicholls, 2003) is used to detect further errors.
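As a rough illustration of the ranking-to-score mapping described in Section 3.2 above, the sketch below fits the one-dimensional regression ŷ = β(w · x) + ε with ordinary least squares and then rounds predictions onto the scoring scale. The helper names and the clipping step are assumptions for illustration, not part of the original system.

```python
import numpy as np

def fit_score_mapper(w, X_train, y_train):
    """Fit y ~= beta * (w . x) + eps, mapping ranking margins
    onto the gold scoring scale."""
    margins = X_train @ w
    A = np.vstack([margins, np.ones_like(margins)]).T
    (beta, eps), *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return beta, eps

def predict_scores(w, beta, eps, X, low, high):
    """Apply the fitted mapping, then round and clip to the
    prompt-specific scoring scale."""
    raw = beta * (X @ w) + eps
    return np.clip(np.rint(raw), low, high).astype(int)
```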
Finally, the unigrams, bigrams and trigrams are weighted by tf-idf (Sparck Jones, 1972), while all other features are weighted by their actual frequency in the essay.

4 Data and Preliminary Evaluation

In order to compare our baseline with previous work, we use the ASAP (Automated Student Assessment Prize) public dataset. Some details of the essays for the eight tasks in the dataset are described in Table 2. The prompts elicit responses of different genres and of different lengths. In particular, it is important to note that the prompts have different scoring scales and are associated with different grade levels (7-10). Furthermore, the gold scores are distributed differently even if resolved to a common 0-60 scale.

In order to benchmark our baseline system against previously developed approaches (BLRR and SVM regression (Phandi et al., 2015)) which use this data, we learned task-specific models using 5-fold cross-validation within each of the eight ASAP sets and aim to predict the unresolved original score as per previous work. We present the quadratic weighted kappa (QW-κ) of the systems in Table 2 (the results for BLRR and SVM regression are taken directly from the original work, so it is unlikely that we have used the exact same fold split). Our baseline preference-ranking model (TAP) outperforms previous approaches on task-specific data. It is worth noting that we did not tune either of the hyperparameters of TAP. Regardless, the consistent increases mean that TAP represents a strong baseline system upon which we develop our constrained multi-task approach.

Task | # essays | Grade level | Original scale | Mean score resolved (0-60) | Human agreement | BLRR (Phandi) | SVM (Phandi) | TAP
1 | 1783 | 8 | 2-12 | 39 | 0.721 | 0.761 | 0.781 | 0.815
2 | 1800 | 10 | 1-6 | 29 | 0.814 | 0.606 | 0.621 | 0.674
3 | 1726 | 10 | 0-3 | 37 | 0.769 | 0.621 | 0.630 | 0.642
4 | 1772 | 10 | 0-3 | 29 | 0.851 | 0.742 | 0.749 | 0.789
5 | 1805 | 8 | 0-4 | 36 | 0.753 | 0.784 | 0.782 | 0.801
6 | 1800 | 10 | 0-4 | 41 | 0.776 | 0.775 | 0.771 | 0.793
7 | 1569 | 7 | 0-30 | 32 | 0.721 | 0.730 | 0.727 | 0.772
8 | 723 | 10 | 0-60 | 37 | 0.629 | 0.617 | 0.534 | 0.688
Table 2: Details of the ASAP dataset and a preliminary evaluation of the performance of our TAP baseline against previous work (Phandi et al., 2015); the last four columns report system performance (QW-κ). All models used only task-specific data and 5-fold cross-validation. Best result is in bold.

5 Multi-Task Learning

For multi-task learning we use EA encoding (Daume III, 2007) extended over k tasks Tj, j = 1..k, where each essay xi is associated with one task xi ∈ Tj. The transfer-learning algorithm takes a set of input vectors associated with the essays, and for each vector xi ∈ R^F maps it via Φ(xi) to a higher dimensional space Φ(xi) ∈ R^{(1+k)·F}. The encoding function Φ(xi) is as follows:

\Phi(x) = \bigoplus_{j=0}^{k} f(x, j) \qquad (1)

where \bigoplus denotes vector concatenation and f(x, j) is as follows:

f(x, j) = \begin{cases} x, & \text{if } j = 0 \\ x, & \text{if } x \in T_j \\ 0_F, & \text{otherwise} \end{cases} \qquad (2)

Essentially, the encoding makes a task-specific copy of the original feature space of dimensionality F to ensure that there is one shared representation and one task-specific representation for each input vector (with a zero vector for all other tasks). This approach can be seen as a re-encoding of the input vectors and can be used with any vector-based learning algorithm. Fig. 1 (left) shows an example of the extended feature vectors for three tasks Tj on different scoring scales. Using only the shared representation (in blue) as input vectors to a learning algorithm results in a standard approach which does not learn task-specific characteristics. However, using the full representation allows the learning algorithm to capture both general and task-specific characteristics jointly. This simple encoding technique is easy to implement and has been shown to be useful for a number of NLP tasks (Daume III, 2007).
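The EA encoding of Eqs. (1)-(2) amounts to simple vector concatenation; a minimal sketch (function name assumed) is:

```python
import numpy as np

def easy_adapt(x, task_id, num_tasks):
    """EA encoding: one shared copy of the feature vector plus one block
    per task, where only the block of the essay's own task is non-zero.
    task_id is assumed to be 0-based (0 .. num_tasks - 1)."""
    F = x.shape[0]
    phi = np.zeros((1 + num_tasks) * F)
    phi[:F] = x                           # shared representation
    start = (1 + task_id) * F
    phi[start:start + F] = x              # task-specific copy
    return phi
```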
Figure 1: Example of the constrained multi-task learning approach for three tasks, where the shared representation is in blue and the task-specific representations are in orange, red, and green. The original gold scores for each task Tj are on different scoring scales. The preference-ranking weight vector w to be learned is shown at the bottom. A one-dimensional linear regression model is learned for each task.

5.1 Constrained Preference-Ranking

Given essays from multiple tasks, it is often the case that the gold scores have different distributions, are not on the same scale, and have been marked using different criteria. Therefore, we introduce a modification to TAP (called cTAPrank) that constrains the creation of pairwise difference vectors when training the weight vector w. In particular, during training we ensure that pairwise difference vectors are not created from pairs of essays originating from different tasks. (The same effect can be achieved in SVMrank by encoding the prompt/task using the query id (qid); this constraint is analogous to the way SVMrank is used in information retrieval, where document relevance scores returned from different queries are not comparable.) We ensure that the same number of difference vectors are sampled during training for both TAPrank and our constrained version (i.e. both models use the same number of training instances). Figure 1 shows an example of the creation of a valid pairwise-difference vector in the multi-task framework.

Furthermore, for cTAPrank we train a final linear regression step on each of the task-specific training data separately. Therefore, we predict a score y for essay xi for task Tj as ˆy = βj(w · xi) + ϵj. This is because for cTAPrank we assume that scores across tasks are not necessarily comparable. Therefore, although we utilise information originating from different tasks, the approach never mixes or directly compares instances originating from different tasks. This approach to predicting the final score is denoted cTAP.

6 Experimental Set-up

In this section, we outline the different learning scenarios, data folds, and evaluation metrics used in our main experiments.

6.1 Learning Approaches

We use the same features outlined in Section 3.3 to encode feature vectors for our learning approaches. In particular we study three learning approaches, denoted and summarised as follows:
TAP: which uses the TAPrank algorithm with input vectors xi of dimensionality F.
MTL-TAP: which uses the TAPrank algorithm with MTL extended input vectors Φ(xi).
MTL-cTAP: which uses the cTAPrank algorithm with MTL extended input vectors Φ(xi). (In the standard learning scenario, when only target-task data is available, MTL-TAP and MTL-cTAP are identical.)

For TAP and MTL-TAP, we attempt to resolve the essay score to a common scale (0-60) and subsequently train and test using this resolved scale. We then convert the score back to the original prompt-specific scale for evaluation. This is the approach used by the work most similar to ours (Phandi et al., 2015). It is worth noting that the resolution of scores to a common scale prior to training is necessary for both TAP and MTL-TAP when using data from multiple ASAP prompts.
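A minimal sketch of the constrained pair creation used by cTAPrank is given below: difference vectors are sampled only from essay pairs belonging to the same task, so gold scores marked on different scales are never compared. The sampling helper is illustrative; the original system samples 5n pairs with the TAP-specific machinery.

```python
import random
import numpy as np

def sample_constrained_pairs(X, scores, tasks, num_pairs, seed=0):
    """Sample difference vectors only from essay pairs that share a task."""
    rng = random.Random(seed)
    by_task = {}
    for idx, t in enumerate(tasks):
        by_task.setdefault(t, []).append(idx)
    eligible = [t for t, ids in by_task.items() if len(ids) >= 2]
    diffs = []
    while len(diffs) < num_pairs:
        t = rng.choice(eligible)
        i, j = rng.sample(by_task[t], 2)
        if scores[i] == scores[j]:
            continue                          # no preference between ties
        hi, lo = (i, j) if scores[i] > scores[j] else (j, i)
        diffs.append(X[hi] - X[lo])
    return np.array(diffs)
```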
However, this step is not required for MTL-cTAP as this algorithm learns a ranking function w without directly comparing essays from different sets during training. Furthermore, the final regression step in cTAP only uses original target task scores and therefore predicts scores on the correct scoring scale for the task.

System | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Tgt-TAP | 0.830 | 0.728 | 0.717 | 0.842 | 0.851 | 0.811 | 0.790 | 0.730
Src-TAP | 0.779 | 0.663 | 0.703 | 0.735 | 0.789 | 0.688 | 0.616 | 0.625
Src-MTL-TAP | 0.824‡ | 0.683† | 0.728‡ | 0.771‡ | 0.829‡ | 0.699 | 0.737‡ | 0.575
Src-MTL-cTAP | 0.826‡ | 0.698‡⋆⋆ | 0.729‡ | 0.773‡⋆ | 0.827‡ | 0.702†⋆ | 0.744‡⋆⋆ | 0.589⋆⋆
All-TAP | 0.806 | 0.652 | 0.702 | 0.805 | 0.814 | 0.802 | 0.728 | 0.629
All-MTL-TAP | 0.831‡ | 0.722‡ | 0.728‡ | 0.823‡ | 0.849‡ | 0.808 | 0.783‡ | 0.680‡
All-MTL-cTAP | 0.832‡ | 0.731‡⋆ | 0.729‡⋆ | 0.840‡⋆⋆ | 0.852‡⋆ | 0.810† | 0.802‡⋆⋆ | 0.717‡⋆⋆
Table 3: Average Spearman ρ of systems over two folds on the ASAP dataset; columns are the target tasks/prompts 1-8. The best approach per prompt is in bold. ‡ (†) means that ρ is statistically greater than Src-TAP (top half) and All-TAP (bottom half) using the Steiger test at the 0.05 level (‡ means significant for both folds, † for one of the folds), while ⋆⋆ means statistically greater than All-MTL-TAP on both folds (⋆ for one fold).

System | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Tgt-TAP | 0.813 | 0.667 | 0.626 | 0.779 | 0.789 | 0.763 | 0.758 | 0.665
All-TAP | 0.803 | 0.598 | 0.583 | 0.648 | 0.747 | 0.741 | 0.674 | 0.462
All-MTL-TAP | 0.825‡ | 0.658‡ | 0.643‡ | 0.702‡ | 0.784‡ | 0.759‡ | 0.778‡ | 0.692‡
All-MTL-cTAP | 0.816‡ | 0.667‡⋆ | 0.654‡⋆⋆ | 0.783‡⋆⋆ | 0.801‡⋆⋆ | 0.778‡⋆⋆ | 0.787‡⋆ | 0.692‡
Table 4: Average QW-κ of systems over two folds on the ASAP dataset; columns are the target tasks/prompts 1-8. The best approach per prompt is in bold. ‡ (†) means that κ is statistically (p < 0.05) greater than All-TAP using an approximate randomisation test (Yeh, 2000) with 50,000 samples. ⋆⋆ means statistically greater than All-MTL-TAP on both folds (⋆ for one fold).

We study the three different learning approaches, TAP, MTL-TAP, and MTL-cTAP, in the following scenarios:
All: where the approach uses data from both the target task and the available source tasks.
Tgt: where the approach uses data from the target task only.
Src: where the approach uses data from only the available source tasks.

6.2 Data Folds

For our main experiments we divide the essays associated with each of the eight tasks into two folds. For all subsequent experiments, we train using data in one fold (often associated with multiple tasks) and test on data in the remaining fold of the specific target task. We report results for each task separately. These splits allow us to perform studies of all three learning approaches (TAP, MTL-TAP, and MTL-cTAP) using varying amounts of source and target task training data.

6.3 Evaluation Metrics

We use both Spearman's ρ correlation and quadratic-weighted κ (QW-κ) to evaluate the performance of all approaches. Spearman's ρ measures the quality of the ranking of predicted scores produced by the system (i.e. the output from the ranking-preference model). We calculate Spearman's ρ using the ordinal gold score and the real-valued prediction on the original prompt-specific scoring scale of each prompt. Statistically significant differences between two correlations sharing one dependent variable (i.e. the gold scores) can be determined using Steiger's (1980) test. QW-κ measures the chance-corrected agreement between the predicted scores and the gold scores. QW-κ can be viewed as a measure of accuracy as it is lower when the predicted scores are further away from the gold scores; this metric measures both the quality of the ranking of scores and the quality of the linear regression step of our approach. These metrics are complementary as they measure different aspects of performance. We calculate QW-κ using the ordinal gold score and the real-valued prediction rounded to the nearest score on the original prompt-specific scale (see Table 2).
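For reference, quadratic-weighted kappa as used in Section 6.3 can be computed as follows (a standard implementation of the metric, not code from the paper):

```python
import numpy as np

def quadratic_weighted_kappa(gold, pred, min_score, max_score):
    """Chance-corrected agreement with a quadratic penalty for disagreement;
    both inputs are integer scores on the prompt-specific scale."""
    labels = np.arange(min_score, max_score + 1)
    n = len(labels)
    observed = np.zeros((n, n))
    for g, p in zip(gold, pred):
        observed[g - min_score, p - min_score] += 1
    weights = np.array([[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)]
                        for i in range(n)])
    expected = np.outer(observed.sum(axis=1),
                        observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```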
Figure 2: Average QW-κ over two folds for all tasks as the size of target-task training data increases.

7 Results and Discussion

Table 3 and Table 4 show the performance of a number of models for both ρ and κ respectively. In general, we see that the MTL versions nearly always outperform the baseline TAP when using the same training data. This shows that multi-task learning is superior to simply using the source tasks as extra training data for the AES task. Interestingly, this has not been shown before. Furthermore, the MTL-cTAP approach tends to be significantly better than the others for many prompts under varying scenarios for both Spearman's ρ and QW-κ. This shows that models that attempt to directly compare essay scores across certain writing-tasks lead to poorer performance.

When looking at Spearman's ρ in Table 3 we see that the models that do not use any target task data during training (Src) can achieve a performance which is close to the baseline that only uses all of the available target data (Tgt-TAP). This indicates that our system can rank essays well without any target task data. However, it is worth noting that without any target task training data and lacking any prior information as to the distribution of gold scores for the target task, achieving a consistently high accuracy (i.e. QW-κ) is extremely difficult (if not impossible). Therefore, Table 4 only shows results for models that make use of target task data. For the models trained with data from all eight tasks, we can see that All-MTL-cTAP outperforms both All-TAP and All-MTL-TAP on most of the tasks for both evaluation metrics (ρ and κ). Interestingly, All-MTL-cTAP also outperforms Tgt-TAP on most of the prompts for both evaluation metrics.
This indicates that All-MTL-cTAP manages to successfully incorporate useful information from the source tasks even when there is ample target-task data. We next look at scenarios when target-task training data is lacking. 7.1 Study of Target-Task Training Size In real-world scenarios, it is often the case that we lack training data for a new writing task. We now report the results of an experiment that uses varying amounts of target-task training data. In particular, we use all source tasks and initially a small sample of task-specific data for each task (every 128th target essay) and measure the performance of Tgt-TAP and the All-* models. We then double the amount of target-task training data used (by using every 64th essay) and again measure performance, repeating this process until all target-task data is used. Figure 2 shows the performance of Tgt-TAP and the All-* models as target-task data increases. In particular, Figure 2 shows that All-MTLcTAP consistently outperforms all approaches in terms of agreement (QW-κ) and is particularly superior when there is very little target-task training data. It is worth remembering that All-MTL-cTAP only uses the target-task training instances for the final linear regression step. These results indicate that because the preference-ranking model performs so well, only a few target-task training instances are needed for the linear-regression step of All-MTL-cTAP. On the other hand, All-MTL-TAP uses all of the training instances in its final linear regression step, and performs significantly worse on a number of prompts. Again this shows the strengths of the constrained multi-task approach. 7.2 Study of Number of Source-tasks All previous experiments that used source task data used the entire seven additional tasks. We 795 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 1 2 3 4 5 6 7 8 QW-kappa source data added cumulatively Task/Prompt 1 All-TAP All-MTL-TAP All-MTL-cTAP 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 2 3 4 5 6 7 8 1 QW-kappa source data added cumulatively Task/Prompt 2 All-TAP All-MTL-TAP All-MTL-cTAP 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 3 4 5 6 7 8 1 2 QW-kappa source data added cumulatively Task/Prompt 3 All-TAP All-MTL-TAP All-MTL-cTAP 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 4 5 6 7 8 1 2 3 QW-kappa source data added cumulatively Task/Prompt 4 All-TAP All-MTL-TAP All-MTL-cTAP 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.8 5 6 7 8 1 2 3 4 QW-kappa source data added cumulatively Task/Prompt 5 All-TAP All-MTL-TAP All-MTL-cTAP 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 6 7 8 1 2 3 4 5 QW-kappa source data added cumulatively Task/Prompt 6 All-TAP All-MTL-TAP All-MTL-cTAP 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 7 8 1 2 3 4 5 6 QW-kappa source data added cumulatively Task/Prompt 7 All-TAP All-MTL-TAP All-MTL-cTAP 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 8 1 2 3 4 5 6 7 QW-kappa source data added cumulatively Task/Prompt 8 All-TAP All-MTL-TAP All-MTL-cTAP Figure 3: Average QW-κ over two folds as number of source tasks increases (using 25 target task instances) now study the performance of the approaches as the number of source tasks changes. In particular, we limit the number of target task training instances to 25 and cumulatively add entire source task data in the order in which they occur in Table 2, starting with the source task appearing directly after the target task. We then measure performance at each stage. At the end of the process, each approach has access to all source tasks and the limited target task data. 
Figure 3 shows the QW-κ for each prompt as the number of source tasks increases. We can see that All-TAP is the worst performing approach and often decreases as certain tasks are added as training data. All-MTL-cTAP is the best performing approach for nearly all prompts. Furthermore, AllMTL-cTAP is more robust than other approaches, as it rarely decreases in performance as the number of tasks increases. 8 Qualitative Analysis As an indication of the type of interpretable information contained in the task-specific representations of the All-MTL-cTAP model, we examined the shared representation and two taskspecific representations that relate to the example tasks outlined in Table 1. Table 5 shows the top weighted lexical features (i.e. unigrams, bigrams, or trigrams) (and their respective weights) in different parts of the All-MTL-cTAP model. In general, we can see that the task-specific lexical components of the model capture topical aspects of the tasks and enable domain adaptation to occur. For example, we can see that books, materials, and censorship are highly discriminative lexical features for ranking essays written in response to task #2. The shared representation contains highly weighted lexical features across all tasks and captures vocabulary items useful for ranking in general. While this analysis gives us some insight into our model, it is more difficult to interpret the weights of other feature types (e.g. POS, GRs) across different parts of the model. We leave further analysis of our approach to future work. 9 Discussion and Conclusion Unlike previous work (Phandi et al., 2015) we have shown, for the first time, that MTL outperforms an approach of simply using source task data as extra training data. This is because our approach uses information from multiple tasks without directly relying on the comparability of gold scores across tasks. Furthermore, it was concluded in previous work that at least some target-task training data is necessary to build high performing AES systems. However, as seen in Table 3, high performance rankers (ρ) can be built without any target-task data. Nevertheless, it is worth noting that without any target-data, accurately predicting the actual score (high κ) is extremely difficult. Therefore, although some extra information (i.e. the expected distribution of gold scores) would need to be used to produce accurate scores with a high quality ranker, the ranking is still useful for assessment in a number of scenarios (e.g. grading on a curve where the distribution of student scores is predefined). The main approach adopted in this paper is quite similar to using SVMrank (Joachims, 2002) while encoding the prompt id as the qid. When combined with a multi-task learning technique this allows the preference-ranking algorithm to learn 796 Shared Task #1 Task #2 2.024 offensive 1.146 this 2.027 offensive 1.852 hydrogen 0.985 less 1.229 books 1.641 hibiscus 0.980 computers 0.764 do n’t 1.602 shows 0.673 very 0.720 materials 1.357 strong 0.661 would 0.680 censorship 1.326 problem 0.647 could 0.679 person 1.288 grateful 0.624 , and 0.676 read 1.286 dirigibles 0.599 family 0.666 children 1.234 books 0.599 less time 0.661 offensive . 1.216 her new 0.579 spend 0.659 those ... ... ... ... ... ... 1.068 urban areas 0.343 benefit our society 0.480 should be able 1.007 airships 0.341 believe that computers 0.475 able to Table 5: Highest weighted lexical features (i.e. 
unigrams, bigrams, or trigrams) and their weights in both shared and task-specific representations of the All-MTL-cTAP model (associated with results in Table 4) for the two example tasks referred to in Table 1. both task-specific and shared-representations in a theoretically sound manner (i.e. without making any speculative assumptions about the relative orderings of essays that were graded on different scales using different marking criteria), and is general enough to be used in many situations. Ultimately these complementary techniques (multi-task learning and constrained pairwise preference-ranking) allow essay scoring data from any source to be included during training. As shown in Section 7.2, our approach is robust to increases in the number of tasks, meaning that one can freely add extra data when available and expect the approach to use this data appropriately. This constrained multi-task preferenceranking approach is likely to be useful for many applications of multi-task learning, when the goldscores across tasks are not directly comparable. Future work will aim to study different dimensions of the prompt (e.g. genre, topic) using multitask learning at a finer level. We also aim to further study the characteristics of the multi-task model in order to determine which features transfer well across tasks. Another avenue of potential research is to use multi-task learning to predict scores for different aspects of text quality (e.g. coherence, grammaticality, topicality). Acknowledgements We would like to thank Cambridge English Language Assessment for supporting this research, and the anonymous reviewers for their useful feedback. We would also like to thank Ekaterina Kochmar, Helen Yannakoudakis, Marek Rei, and Tamara Polajnar for feedback on early drafts of this paper. References Øistein E Andersen, Helen Yannakoudakis, Fiona Barker, and Tim Parish. 2013. Developing and testing a self-assessment and tutoring system. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, BEA, pages 32–41. Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater R⃝v. 2. The Journal of Technology, Learning and Assessment, 4(3). Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, pages 77–80, Sydney, Australia, July. Association for Computational Linguistics. Ted Briscoe, Ben Medlock, and Øistein Andersen. 2010. Automated assessment of esol free text examinations. Technical Report 790, The Computer Lab, University of Cambridge, February. Olivier Chapelle and S Sathiya Keerthi. 2010. Efficient algorithms for ranking with svms. Information Retrieval, 13(3):201–215. Hongbo Chen and Ben He. 2013. Automated essay scoring by maximizing human-machine agreement. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1741–1752, Seattle, Washington, USA, October. Association for Computational Linguistics. Hao Cheng, Hao Fang, and Mari Ostendorf. 2015. Open-domain name error detection using a multitask RNN. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 737–746. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM. 
797 Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner english: The nus corpus of learner english. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22–31, Atlanta, Georgia, June. Association for Computational Linguistics. Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256–263, Prague, Czech Republic, June. Association for Computational Linguistics. Kevin Duh, Katsuhito Sudoh, Hajime Tsukada, Hideki Isozaki, and Masaaki Nagata, 2010. Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, chapter NBest Reranking by Multitask Learning, pages 375– 383. Association for Computational Linguistics. Mariano Felice, Zheng Yuan, Øistein E. Andersen, Helen Yannakoudakis, and Ekaterina Kochmar. 2014. Grammatical error correction using hybrid systems and type filtering. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL 2014, Baltimore, Maryland, USA, June 26-27, 2014, pages 15– 24. Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukwac, a very large web-derived corpus of english. In Proceedings of the 4th Web as Corpus Workshop (WAC-4) Can we beat Google, pages 47–54. Michael Heilman and Nitin Madnani. 2013. Ets: domain adaptation and stacking for short answer scoring. In Proceedings of the 2nd joint conference on lexical and computational semantics, volume 2, pages 275–279. Derrick Higgins, Jill Burstein, Daniel Marcu, and Claudia Gentile. 2004. Evaluating multiple aspects of coherence in student essays. In HLT-NAACL, pages 185–192. D. Higgins, J. Burstein, and Y. Attali. 2006. Identifying off-topic student essays without topic-specific training data. Natural Language Engineering, 12(2):145–159. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 133– 142. ACM. N Van Labeke, D Whitelock, D Field, S Pulman, and JTE Richardson. 2013. Openessayist: extractive summarisation and formative assessment of freetext essays. In Proceedings of the 1st International Workshop on Discourse-Centric Learning Analytics, Leuven, Belgium, April. Thomas K Landauer, Peter W Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse processes, 25(2-3):259–284. Leah S Larkey. 1998. Automatic essay grading using text categorization techniques. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 90–95. ACM. Courtney Napoles and Chris Callison-Burch. 2015. Automatically scoring freshman writing: A preliminary investigation. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 254–263, Denver, Colorado, June. Association for Computational Linguistics. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task (CoNLL-2013 Shared Task). Association for Computational Linguistics. Diane Nicholls. 2003. 
The cambridge learner corpus: Error coding and analysis for lexicography and elt. In Proceedings of the Corpus Linguistics 2003 conference, volume 16, pages 572–581. Ellis B Page. 1966. The imminence of grading essays by computer. Phi Delta Kappan, 47:238–243. Ellis Batten Page. 1994. Computer grading of student prose, using modern concepts and software. The Journal of experimental education, 62(2):127–142. Ellis Batten Page. 2003. Project essay grade: Peg. Automated essay scoring: A cross-disciplinary perspective, pages 43–54. Isaac Persing and Vincent Ng. 2014. Modeling prompt adherence in student essays. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1534–1543, Baltimore, Maryland, June. ACL. Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543–552. Peter Phandi, Kian Ming A. Chai, and Hwee Tou Ng. 2015. Flexible domain adaptation for automated essay scoring using correlated linear regression. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 431–439, Lisbon, Portugal, September. Association for Computational Linguistics. 798 Alla Rozovskaya and Dan Roth. 2011. Algorithm selection and model adaptation for esl correction tasks. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 924–933, Stroudsburg, PA, USA. Association for Computational Linguistics. Lawrence M Rudner and Tahung Liang. 2002. Automated essay scoring using bayes’ theorem. The Journal of Technology, Learning and Assessment, 1(2). Swapna Somasundaran, Jill Burstein, and Martin Chodorow. 2014. Lexical chaining for measuring discourse coherence quality in test-taker essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 950–961, Dublin, Ireland, August. Dublin City University and Association for Computational Linguistics. Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation, 28(1):11–21. James H Steiger. 1980. Tests for comparing elements of a correlation matrix. Psychological bulletin, 87(2):245. Helen Yannakoudakis and Ted Briscoe. 2012. Modeling coherence in esol learner texts. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 33–43, Montr´eal, Canada, June. Association for Computational Linguistics. Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading esol texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180–189, Portland, Oregon, USA, June. Association for Computational Linguistics. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th Conference on Computational Linguistics - Volume 2, COLING ’00, pages 947– 953, Stroudsburg, PA, USA. Association for Computational Linguistics. 799
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 800–810, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics CFO: Conditional Focused Neural Question Answering with Large-scale Knowledge Bases Zihang Dai∗ Carnegie Mellon University [email protected] Lei Li∗ Toutiao.com [email protected] Wei Xu Baidu Research [email protected] Abstract How can we enable computers to automatically answer questions like “Who created the character Harry Potter”? Carefully built knowledge bases provide rich sources of facts. However, it remains a challenge to answer factoid questions raised in natural language due to numerous expressions of one question. In particular, we focus on the most common questions — ones that can be answered with a single fact in the knowledge base. We propose CFO, a Conditional Focused neuralnetwork-based approach to answering factoid questions with knowledge bases. Our approach first zooms in a question to find more probable candidate subject mentions, and infers the final answers with a unified conditional probabilistic framework. Powered by deep recurrent neural networks and neural embeddings, our proposed CFO achieves an accuracy of 75.7% on a dataset of 108k questions – the largest public one to date. It outperforms the current state of the art by an absolute margin of 11.8%. 1 Introduction Community-driven question answering (QA) websites such as Quora, Yahoo-Answers, and Answers.com are accumulating millions of users and hundreds of millions of questions. A large portion of the questions are about facts or trivia. It has been a long pursuit to enable machines to answer such questions automatically. In recent years, several efforts have been made on utilizing open-domain knowledge bases to answer factoid questions. A knowledge ∗Part of the work was done while at Baidu. base (KB) consists of structured representation of facts in the form of subject-relation-object triples. Lately, several large-scale generalpurpose KBs have been constructed, including YAGO (Suchanek et al., 2007), Freebase (Bollacker et al., 2008), NELL (Carlson et al., 2010), and DBpedia (Lehmann et al., 2014). Typically, structured queries with predefined semantics (e.g. SPARQL) can be issued to retrieve specified facts from such KBs. Thus, answering factoid questions will be straightforward once they are converted into the corresponding structured form. However, due to complexity of language, converting natural language questions to structure forms remains an open challenge. Among all sorts of questions, there is one category that only requires a single fact (triple) in KB as the supporting evidence. As a typical example, the question “Who created the character Harry Potter” can be answered with the single fact (HarryPotter, CharacterCreatedBy, J.K.Rowling). In this work, we refer to such questions as single-fact questions. Previously, it has been observed that single-fact questions constitute the majority of factoid questions in community QA sites (Fader et al., 2013). Despite the simplicity, automatically answering such questions remains far from solved — the latest best result on a dataset of 108k single-fact questions is only 63.9% in terms of accuracy (Bordes et al., 2015). To find the answer to a single-fact question, it suffices to identify the subject entity and relation (implicitly) mentioned by the question, and then forms a corresponding structured query. The problem can be formulated into a probabilistic form. 
Given a single-fact question q, finding the subject-relation pair ˆs, ˆr from the KB K which maximizes the conditional probability p(s, r|q), i.e.

\hat{s}, \hat{r} = \arg\max_{s, r \in \mathcal{K}} p(s, r \mid q) \qquad (1)

Based on the formulation (1), the central problem is to estimate the conditional distribution p(s, r|q). It is very challenging because of a) the vast amount of facts — a large-scale KB such as Freebase contains billions of triples, b) the huge variety of language — there are multiple aliases for an entity, and numerous ways to compose a question, and c) the severe sparsity of supervision — most combinations of s, r, q are not expressed in training data. Faced with these challenges, existing methods have explored incorporating prior knowledge into semantic parsers, designing models and representations with better generalization properties, utilizing large-margin ranking objectives to estimate the model parameters, and pruning the search space during inference. Noticeably, models based on neural networks and distributed representations have largely contributed to the recent progress (see section 2).

In this paper, we propose CFO, a novel method to answer single-fact questions with large-scale knowledge bases. The contributions of this paper are:
• we employ a fully probabilistic treatment of the problem with a novel conditional parameterization using neural networks,
• we propose the focused pruning method to reduce the search space during inference, and
• we investigate two variations to improve the generalization of representations for millions of entities under highly sparse supervision.
In experiments, CFO achieves 75.7% in terms of top-1 accuracy on the largest dataset to date, outperforming the current best record by an absolute margin of 11.8%.

2 Related Work

The research of KB-supported QA has evolved from earlier domain-specific QA (Zelle and Mooney, 1996; Tang and Mooney, 2001; Liang et al., 2013) to open-domain QA based on large-scale KBs. An important line of research has been trying to tackle the problem by semantic parsing, which directly parses natural language questions into structured queries (Liang et al., 2011; Cai and Yates, 2013; Kwiatkowski et al., 2013; Yao and Van Durme, 2014). Recent progress includes designing KB-specific logical representations and parsing grammars (Berant et al., 2013), using distant supervision (Berant et al., 2013), utilizing paraphrase information (Fader et al., 2013; Berant and Liang, 2014), requiring few question-answer pairs (Reddy et al., 2014), and exploiting ideas from agenda-based parsing (Berant and Liang, 2015).

In contrast, another line of research tackles the problem by deep-learning-powered similarity matching. The core idea is to learn semantic representations of both the question and the knowledge from observed data, such that the correct supporting evidence will be the nearest neighbor of the question in the learned vector space. Thus, a main difference among several approaches lies in the neural networks proposed to represent questions and KB elements. While (Bordes et al., 2014b; Bordes et al., 2014a; Bordes et al., 2015; Yang et al., 2014) use relatively shallow embedding models to represent the question and knowledge, (Yih et al., 2014; Yih et al., 2015) employ a convolutional neural network (CNN) to produce the representation. In the latter case, both the question and the relation are treated as a sequence of letter-trigram patterns, and fed into two parameter-shared CNNs to get their embeddings.
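As an aside, the letter-trigram representation used in that line of work amounts to hashing each word into overlapping character windows with boundary markers; a small sketch follows (the '#' padding convention follows the original word-hashing papers and is an assumption here, not taken from this paper):

```python
def letter_trigrams(text):
    """Split each word into overlapping three-character windows after
    adding '#' boundary markers, e.g. 'cat' -> ['#ca', 'cat', 'at#']."""
    grams = []
    for word in text.lower().split():
        padded = "#" + word + "#"
        grams.extend(padded[i:i + 3] for i in range(len(padded) - 2))
    return grams

print(letter_trigrams("character"))
# ['#ch', 'cha', 'har', 'ara', 'rac', 'act', 'cte', 'ter', 'er#']
```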
What’s more, instead of measuring the similarity between a question and an evidence triple with a single model as in (Bordes et al., 2015), (Yih et al., 2014; Yih et al., 2015) adopt a multi-stage approach. In each stage, one element of the triple is compared with the question to produce a partial similarity score by a dedicated model. Then, these partial scores are combined to generate the overall measurement. Our proposed method is closely related to the second line of research, since neural models are employed to learn semantic representations. As in (Bordes et al., 2015; Yih et al., 2014), we focus on single-fact questions. However, we propose to use recurrent neural networks (RNN) to produce the question representation. More importantly, our method follows a probabilistic formulation, and our parameterization relies on factors other than similarity measurement. Besides KB-based QA, our work is also loosely related to work using deep learning systems in QA tasks with free text evidences. For example, (Iyyer et al., 2014) focuses questions from the quiz bowl competition with recursive neural network. New architectures including memory networks (Weston et al., 2015), dynamic memory networks (Kumar et al., 2015), and more (Peng et al., 2015; Lee et al., 2015) have been explored under the bAbI syn801 thetic QA task (Weston et al., 2016). In addition, (Hermann et al., 2015) seeks to answer Cloze style questions based on news articles. 3 Overview In this section, we formally formulate the problem of single-fact question answering with knowledge bases. A knowledge base K contains three components: a set of entities E, a set of relations R, and a set of facts F = {⟨s, r, o⟩} ⊆E × R × E, where s, o ∈E are the subject and object entities, and r ∈R is a binary relation. E(r), E(s) are the vector representations of a relation and an entity, respectively. s →r indicates that there exists some entity o such that ⟨s, r, o⟩∈F. For singlefact questions, a common assumption is that the answer entity o and some triple ⟨si, rk, o⟩∈F reside in the given knowledge base. The goal of our model is to find such subject si and relation rk mentioned or implied in the question. Once found, a structured query (e.g. in SPARQL) can be constructed to retrieve the result entity. 3.1 Conditional Factoid Factorization Given a question q, the joint conditional probability of subject-relation pairs p(s, r|q) can be used to retrieve the answer using the exact inference defined by Eq. (1). However, since there can be millions of entities and thousands of relations in a knowledge base, it is less effective to model p(s, r|q) directly. Instead, we propose a conditional factoid factorization, p(s, r|q) = p(r|q) · p(s|q, r) (2) and utilize two neural networks to parameterize each component, p(r|q) and p(s|q, r), respectively. Hence, our proposed method contains two phases: inferring the implied relation r from the question q, and inferring the mentioned subject entity s given the relation r and the question q. There is an alternative factorization p(s, r|q) = p(s|q)·p(r|s, q). However, it is rather challenging to estimate p(s|q) directly due to the vast amount of entities (> 106) in a KB. In comparison, our proposed factorization takes advantage of the relatively limited number of relations (on the order of thousands). What’s more, by exploiting additional information from the candidate relation r, it’s more feasible to model p(s|q, r) than p(s|q), leading to more robust estimation. 
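A minimal sketch of inference under this factorization is shown below: sum the two log-probabilities for every candidate subject-relation pair and keep the argmax. The two callables stand in for the neural networks introduced later in the paper and are hypothetical interfaces, not the released implementation.

```python
import math

def best_subject_relation(question, candidates, log_p_relation, log_p_subject):
    """Score every candidate (subject, relation) pair with
    log p(r | q) + log p(s | q, r), following Eq. (2), and return the
    argmax.  `candidates` may be the whole KB or a pruned pool."""
    best_pair, best_score = None, -math.inf
    for s, r in candidates:
        score = log_p_relation(r, question) + log_p_subject(s, question, r)
        if score > best_score:
            best_pair, best_score = (s, r), score
    return best_pair
```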
A key difference from the prior multi-step approach is that our method does not assume any independence between the target subject and relation given a question, as is done in the prior method (Yih et al., 2014). It proves effective in our experiments.

3.2 Inference via Focused Pruning

As defined by Eq. (1), a solution needs to consider all available subject-relation pairs in the KB as candidates. With a large-scale KB, the number of candidates can be notoriously large, resulting in an extremely noisy candidate pool. We propose a method to prune the candidate space. The pruning is equivalent to a function that takes a KB K and a question q as input, and outputs a much more limited set C of candidate subject-relation pairs:

H(\mathcal{K}, q) \rightarrow \mathcal{C} \qquad (3)

Cs and Cr are used to represent the subject and relation candidates, respectively. The fundamental intuition for pruning is that the subject entity must be mentioned by some textual substring (subject mention) in the question. Thus, the candidate space can be restricted to entities whose name/alias matches an n-gram of the question, as in (Yih et al., 2014; Yih et al., 2015; Bordes et al., 2015). We refer to this straightforward method as N-Gram pruning. By considering all n-grams, this approach usually achieves a high recall rate. However, the candidate pool is still noisy due to many non-subject-mention n-grams.

Our key idea is to reduce the noise by guiding the pruning method's attention to more probable parts of a question. An observation is that certain parts of a sentence are more likely to be the subject mention than others. For example, "Harry Potter" in "Who created the character Harry Potter" is more likely than "the character", "character Harry", etc. Specifically, our method employs a deep network to identify such focus segments in a question. This way, the candidate pool can be not only more compact, but also significantly less noisy. Finally, combining the ideas of Eq. (2) and (3), we propose an approximate solution to the problem defined by Eq. (1):

\hat{s}, \hat{r} \approx \arg\max_{s, r \in \mathcal{C}} p(s \mid q, r)\, p(r \mid q) \qquad (4)

4 Proposed CFO

In this section, we first review the gated recurrent unit (GRU), an RNN variant extensively used in this work.
Then, the question embedding model f consists of a word embedding layer to transform tokens into distributed representations, a two-layer BiGRU to capture the question semantics, and a linear layer to project the final hidden states of the BiGRU into the same vector space as E(r). Subject network As introduced in section 3, the factor p(s|q, r) models the fitness of a subject s appearing in the question q, given the main topic is about the relation r. Thus, two forces a) the raw context expressed by q, and b) the candidate topic described by r, jointly impact the fitness of the subject s. For simplicity, we use two additive terms to model the joint effect pθs(s|q, r) = exp u(s, r, q)  P s′ exp u(s′, r, q)  (11) where u(s, r, q) is the subject scoring function, u(s, r, q) = g(q)⊤E(s) + αh(r, s) (12) g(q) is another semantic question embedding, E(s) is a vector representation of a subject, h(r, s) is the subject-relation score, and α is the weight parameter used to trade off the two sources. Firstly, the context score g(q)⊤E(s) models the intrinsic plausibility that the subject s appears in the question q using vector space similarity. As g(q)⊤E(s) has the same form as equation (10), we let g adpot the same model structure as f. However, initializing E(s) randomly and training it with supervised signal, just like training E(r), is insufficient in practice — while a large-scale KB has millions of subjects, only thousands of question-triple pairs are available for training. To alleviate the problem, we seek two potential solutions: a) pretrained embeddings, and b) type vector representation. The pretrained embedding approach utilizes unsupervised method to train entity embedings. In particular, we employ the TransE (Bordes et al., 2013), which trains the embedings of entities and relations by enforcing E(s) + E(r) = E(o) for every observed triple (s, r, o) ∈K. As there exists other improved variants (Gu et al., 2015), TransE scales the best when KB size grows. Alternatively, type vector is a fixed (not trainable) vector representation of entities using type information. Since each entity in the KB has one or more predefined types, we can encode the entity as a vector (bag) of types. Each dimension of a type vector is either 1 or 0, indicating whether the entity is associated with a specific type or not. Thus, the dimensionality of a type vector is equal to the number of types in KB. Under this setting, with E(s) being a binary vector, let g(q) be a continuous vector with arbitrary value range can be problematic. Therefore, when type vector is used as E(s), we add a sigmoid layer upon the final linear projection of g, squashing each element of g(q) to the range [0, 1]. Compared to the first solution, type vector is fully based on the type profile of an entity, and requires no training. As a benefit, considerably 803 Who created ... Potter? 𝐸(𝑟$) 𝐸(𝑠') 𝐸(𝑠() 𝐸(𝑠)) 𝐸(𝑠*) … Linear Projection (+ Sigmoid) 𝑔(𝑞) 𝑝(𝑠(|𝑞,𝑟$) BiGRU Word Embed. Concat BiGRU Figure 1: Overall structure of the subject network. Sigmoid layer is added only when type vector is used as E(s). fewer parameters are needed. Also, given the type information is discriminative enough, using type vector will lead to easier generalization. However, containing only type information can be very restrictive. In addition to the context score, we use the subject-relation score h(r, s) to capture the compatibility that s and r show up together. 
Intuitively, for an entity to appear in a topic characterized by a relation, a necessary condition will be that the entity has the relation connected to it. Inspired by this structural regularity, in the simplest manner, we instantiate the idea with an indicator function, h(r, s) = 1(s →r) (13) As there exists other more sophisticated statistical parameterizations, the proposed approach is able to capture the core idea of the structural regularity without any parameter. Finally, putting two scores together, Fig.1 summarizes the overall structure of the subject network. 4.3 Focused Pruning As discussed in section 3.2, N-Gram pruning is still subject to large amount of noise in inference due to many non-subject-mention n-grams. Motivated by this problem, we propose to reduce such noise by focusing on more probable candidates using a special-purpose sequence labeling network. Basically, a sequence labeling model is trained to tag some consecutive tokens as the subject mention. Following this idea, during inference, only the most probable n-gram predicted by the model will be retained, and then used as the subject mention to generate the candidate pool C. Hence, we refer to this method as focused pruning. Formally, let W(q) be all the n-grams of the question q, p(w|q) be the probability that the n-gram w is the subject mention of q, the focused pruning function Hs is defined as ˆw = arg max w∈W(q) pκ(w|q) C = {(s, r) : M(s, ˆw), s →r} (14) where M(s, ˆw) represents some predefined match between the subject s and the predicted subject mention ˆw. Intuitively, this pruning method resembles the human behavior of first identifying the subject mention with the help of context, and then using it as the key word to search the KB. To illustrate the effectiveness of this idea, we parameterize pκ(w|q) with a general-purpose neural labeling model, which consists of a word embedding layer, two layers of BiGRU, and a linearchain conditional random field (CRF). Thus, given a question q of length T, the score of a sequence label configuration y ∈RT is s(y, q) = T X t=1 H(q)t,yt + T X t=2 Ayt−1,yt where H(q) is the hidden output of the top-layer BiGRU, A is the transition matrix possesed by the CRF, and [·]i,j indicates the matrix element on row i collum j. Finally, the match function M(s, ˆw) is simply defined as either strict match between an alias of s and ˆw, or approximate match provided by the Freebase entity suggest API 1. Note that more elaborative match function can further boost the performance, but we leave it for future work. 5 Parameter Estimation In this section, we discuss the parameter estimation for the neural models presented in section 4. With standard parameterization, the focused labeling model pκ(w|q) can be directly trained by maximum likelihood estimation (MLE) and backpropagation. Thus, we omit the discussion here, and refer readers to (Huang et al., 2015) for details. Also, we leave the problem of how to obtain the training data to section 6. 5.1 Decomposable Log-Likelihood To estimate the parameters of pθr(r|q) and pθs(s|r, q), MLE can be utilized to maximize the empirical (log-)likelihood of subject-relation pairs 1The approximate match is used only when there is no strict match. The suggest API takes a string as input, and returns no more than 20 potentially matched entities. 804 given the associated question. 
Following this idea, let {s(i), r(i), q(i)}N i=1 be the training dataset, the MLE solution takes the form θMLE = arg max θr,θs N X i=1  log pθr(r(i)|q(i)) + log pθs(s(i)|r(i), q(i))  (15) Note that there is no shared parameter between pθs(s|q, r) and pθr(r|q). 2 Therefore, the same solution can be reached by separately optimizing the two log terms, i.e. θMLE r = arg max θr N X i=1 log pθr(r(i)|q(i)) θMLE s = arg max θs N X i=1 log pθs(s(i)|r(i), q(i)) (16) It is important to point out that the decomposability does not always hold. For example, when the parametric form of h(s, r) depends on the embedding of r, the two terms will be coupled and joint optimization must be performed. From this perspective, the simple form of h(s, r) also eases the training by inducing the decomposability. 5.2 Approximation with Negative Samples As the two problems defined by equation (16) take the standard form of classification, theoretically, cross entropy can used as the training objective. However, computing the partition function is often intractable, especially for pθs(s|r, q), since there can be millions of entities in the KB. Faced with this problem, classic solutions include contrastive estimation (Smith and Eisner, 2005), importance sampling approximation (Bengio et al., 2003), and hinge loss with negative samples (Collobert and Weston, 2008). In this work, we utilize the hinge loss with negative samples as the training objective. Specifically, the loss function w.r.t θr has the form L(θr) = N X i=1 Mr X j=1 max  0, γr −v(r(i), q(i)) + v(r(j), q(i))  (17) where r(j) is one of the Mr negative samples (i.e. s(i) ̸→r(j)) randomly sampled from R, and γr is 2Word embeddings are not shared across models. the predefined margin. Similarly, the loss function w.r.t θs takes the form L(θs) = N X i=1 Ms X j=1 max  0, γs −u(s(i), r(i), q(i)) + u(s(j), r(i), q(i))  (18) Despite the negative sample based approximation, there is another practical difficulty when type vector is used as the subject representation. Specifically, computing the value of u(s(j), r(i), q(i)) requires to query the KB for all types of each negative sample s(j). So, when Ms is large, the training can be extremely slow due to the limited bandwidth of KB query. Consequently, under the setting of type vector, we instead resort to the following type-wise binary cross-entropy loss ˜L(θs) = − N X i=1 K X k=1  E(s(i))k log g(q(i))k +  1 −E(s(i))k  log  1 −g(q(i))k  (19) where K is the total number of types, g(q)k and E(s(i))k are the k-th element of g(q) and E(s(i)) respectively. Intuitively, with sigmoid squashed output, g(q) can be regarded as K binary classifiers, one for each type. Hence, g(q)k reprents the predicted probability that the subject is associated with the k-th type. 6 Experiments In this section, we conduct experiments to evaluate the proposed system empirically. 6.1 Dataset and Knowledge Base We train and evaluate our method on the SIMPLEQUESTIONS dataset3 — the largest question-triple dataset. It consists of 108,442 questions written in English by human annotators. Each question is paired with a subject-relation-object triple from Freebase. We follow the same splitting for training (70%), validation (10%) and testing (20%) as (Bordes et al., 2015). We use the same subset of Freebase (FB5M) as our knowledge base so that the results are directly comparable. It includes 4,904,397 entities, 7,523 relations, and 22,441,880 facts. 
There are alternative datasets available, such as WebQuestions (Berant et al., 2013) and Free917 (Cai and Yates, 2013). However, these datasets are quite restricted in sample size — the former includes 5,810 samples (train + test) and the latter only 917; both are smaller than the number of relations in Freebase.

To train the focused labeling model, information about whether a word is part of the subject mention is needed. We obtain such information by reverse linking from the ground-truth subject to its mention in the question. Given a question q corresponding to subject s, we match the name and aliases of s against all n-grams that can be generated from q. Once a match is found, we label the matched n-gram as the subject mention. In the case of multiple matches, only the longest matched n-gram is used as the correct one.

6.2 Evaluation and Baselines

For evaluation, we consider the same metric introduced in (Bordes et al., 2015), which takes a prediction as correct if both the subject and the relation are correctly retrieved. Based on this metric, we compare CFO with a few baseline systems, which include both the Memory Network QA system (Bordes et al., 2015) and systems with alternative components and parameterizations from existing work (Yih et al., 2014; Yih et al., 2015). We did not compare with alternative subject networks because the only existing method (Yih et al., 2014) relies on the unique textual name of each entity, which does not generally hold in knowledge bases (except in REVERB). Alternative approaches for the pruning method, relation network, and entity representation are described below.

Pruning methods We consider two baseline methods previously used to prune the search space. The first baseline is the N-Gram pruning method introduced in Section 3, as it has been successfully used in previous work (Yih et al., 2014; Yih et al., 2015). Basically, it establishes the candidate pool by retaining subject-relation pairs whose subject can be linked to one of the n-grams generated from the question. The second is N-Gram+, a revised version of N-Gram pruning with additional heuristics (Bordes et al., 2015). Instead of considering all n-grams that can be linked to entities in the KB, heuristics related to overlapping n-grams, stop words, interrogative pronouns, and so on are exploited to further shrink the n-gram pool. Accordingly, the search space is restricted to subject-relation pairs whose subject can be linked to one of the remaining n-grams after applying the heuristic filtering.

Relation scoring network We compare our proposed method with two previously used models. The first baseline is the embedding average model (Embed-AVG) used in (Bordes et al., 2014a; Bordes et al., 2014b; Bordes et al., 2015). Basically, it takes the element-wise average of the word embeddings of the question as the question representation. The second is the letter-tri-gram CNN (LTG-CNN) used in (Yih et al., 2014; Yih et al., 2015), where the question and relation are separately embedded into the vector space by two parameter-shared LTG-CNNs. [Footnote 4: In Freebase, each predefined relation has a single human-recognizable reference form, usually a sequence of words.] In addition, (Yih et al., 2014; Yih et al., 2015) observed better performance of the LTG-CNN when substituting the subject mention with a special symbol. Naturally, this can be combined with the proposed focused labeling, since the latter is able to identify the potential subject mention in the question. So, we train another LTG-CNN with symbolized questions, which is denoted as LTG-CNN+.
Note that this model is only tested when the focused labeling pruning is used.

Entity representation In section 4.2, we describe two possible ways to improve the vector representation of the subject: TransE-pretrained embeddings and type vectors. To evaluate their effectiveness, we also include this variation in the experiment and compare their performance with randomly initialized entity embeddings.

6.3 Experiment Setting

During training, all word embeddings are initialized using pretrained GloVe (Pennington et al., 2014) and then fine-tuned in subsequent training. The word embedding dimension is set to 300, and the BiGRU hidden size to 256. For pretraining the entity embeddings using TransE (see section 4.2), only triples included in FB5M are used. All other parameters are randomly initialized uniformly from [−0.08, 0.08], following (Graves, 2013). Both hinge loss margins γ_s and γ_r are set to 0.1. The negative sampling sizes M_s and M_r are both 1024. For optimization, parameters are trained using mini-batch AdaGrad (Duchi et al., 2011) with Momentum (Pham et al., 2015). Learning rates are tuned to be 0.001 for question embedding with the type vector, 0.03 for the LTG-CNN methods, and 0.02 for the rest of the models. The momentum rate is set to 0.9 for all models, and the mini-batch size is 256. In addition, vertical dropout (Pham et al., 2014; Zaremba et al., 2014) is used to regularize all BiGRUs in our experiment. [Footnote 5: For more details, source code is available at http://zihangdai.github.io/cfo for reference.]

Pruning Method | Relation Network | Entity Representation: Random | Pretrain | Type Vec
Memory Network | 62.9 / 63.9* (entity representation columns not applicable)
N-Gram | Embed-AVG | 39.4 | 42.2 | 50.9
N-Gram | LTG-CNN | 32.8 | 36.8 | 45.6
N-Gram | BiGRU | 43.7 | 46.7 | 55.7
N-Gram+ | Embed-AVG | 53.8 | 57.0 | 58.7
N-Gram+ | LTG-CNN | 46.3 | 50.9 | 56.0
N-Gram+ | BiGRU | 58.3 | 61.6 | 62.6
Focused Pruning | Embed-AVG | 71.4 | 71.7 | 72.1
Focused Pruning | LTG-CNN | 67.6 | 67.9 | 68.6
Focused Pruning | LTG-CNN+ | 70.2 | 70.4 | 71.1
Focused Pruning | BiGRU | 75.2 | 75.5 | 75.7
Table 1: Accuracy on the SIMPLEQUESTIONS testing set. * indicates using ensembles. N-Gram+ uses additional heuristics. The proposed CFO (focused pruning + BiGRU + type vector) achieves the top accuracy.

6.4 Results

Trained on 75,910 questions, our proposed model and the baseline methods are evaluated on the testing set of 21,687 questions. Table 1 presents the accuracy of these methods. We evaluated all combinations of pruning methods, relation networks and entity representation schemes, as well as the result from the memory network, as described in Section 6.1. CFO (focused pruning + BiGRU + type vector) achieves the best performance, outperforming all other methods by substantial margins. Inspecting Table 1 vertically within each pruning-method block, for the same pruning method and entity representation scheme, the BiGRU-based relation scoring network boosts the accuracy by 3.5% to 4.8% compared to the second-best alternative. This evidence suggests the superiority of RNNs in capturing the semantics of question utterances. Surprisingly, it turns out that Embed-AVG achieves better performance than the more complex LTG-CNN. Inspecting Table 1 horizontally, the type-vector-based representation consistently leads to better performance, especially when N-Gram pruning is used. This suggests that under sparse supervision, training high-quality distributed knowledge representations remains a challenging problem.
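To make the type-vector scheme behind these numbers more tangible, here is a small sketch of the K-hot subject representation E(s) and the type-wise cross-entropy of equation (19). The sketch is ours; the type identifiers and variable names are invented for illustration and do not come from the paper's code.

import numpy as np

def type_vector(subject_types, type_index):
    # K-hot vector E(s): one slot per KB type, 1 if the subject carries that type.
    e = np.zeros(len(type_index))
    for t in subject_types:
        e[type_index[t]] = 1.0
    return e

def typewise_bce(e_s, g_q, eps=1e-12):
    # Equation (19) for a single question: g(q) is read as K independent
    # sigmoid classifiers, one per type.
    g_q = np.clip(g_q, eps, 1.0 - eps)
    return float(-np.sum(e_s * np.log(g_q) + (1.0 - e_s) * np.log(1.0 - g_q)))

type_index = {"people.person": 0, "film.actor": 1, "location.location": 2}  # hypothetical types
e = type_vector({"people.person", "film.actor"}, type_index)
loss = typewise_bce(e, g_q=np.array([0.9, 0.6, 0.2]))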
That said, pretraining entity embeddings with TransE indeed gives better performance compared to random initialization, indicating the future potential of unsupervised methods for improving continuous knowledge representations. In addition, all systems using our proposed focused pruning method outperform their counterparts with alternative pruning methods. Without using ensembles, CFO is already better than the memory network ensembles by 11.8%. This substantiates the general effectiveness of focused pruning with subject labeling, regardless of the other sub-modules.

6.5 Effectiveness of Pruning

According to the results in section 6.4, the focused pruning plays a critical role in achieving the best performance. To get a deeper understanding of its effectiveness, we analyze how the pruning methods affect the accuracy of the system. Due to space limits, we focus on systems with the BiGRU as the relation scoring function and the type vector as the entity representation. Table 2 summarizes the recall — the percentage of pruned subject-relation candidates containing the answer — and the resulting accuracy. The single-subject case refers to the scenario in which there is only one candidate entity in C_s (possibly with multiple relations), and the multi-subject case means there are multiple entities in C_s. As the table shows, focused pruning achieves a recall rate comparable to N-Gram pruning. [Footnote 6: Less than 3% of the recalled candidates rely on approximate matching in the focused pruning.] Given the state-of-the-art performance of sequence labeling systems, this result should not be surprising. Thus, the difference in performance comes entirely from the resulting accuracy. Notice that there is a huge accuracy gap between the two cases. Essentially, in the single-candidate case, the system only needs to identify the relation based on the more robust model p_{\theta_r}(r|q). In contrast, in the multi-candidate case, the system also relies on p_{\theta_s}(s|q, r), which has significantly more parameters to estimate and is thus less robust. Consequently, by focusing only on the most probable sub-string, the proposed focused pruning produces many more single-candidate situations, leading to a better overall accuracy.

Pruning method | Pruning recall | Inference accuracy within the recalled: Single-subject case | Multi-subject case | Overall accuracy
N-Gram | 94.8% | 18 / 21 = 85.7% | 12051 / 20533 = 58.7% | 55.7%
N-Gram+ | 92.9% | 126 / 138 = 91.3% | 13460 / 20017 = 67.2% | 62.6%
Focused pruning | 94.9% | 9925 / 10705 = 92.7% | 6482 / 9876 = 65.6% | 75.7%
Table 2: Comparison of different space pruning methods. N-Gram+ uses additional heuristics. Single- and multi-subject refer to the number of distinct subjects in the candidates. The proposed focused pruning achieves the best scores.

6.6 Additional Analysis

In the aforementioned experiments, we have kept the focused labeling model and the subject scoring network fixed. To further understand the importance and sensitivity of this specific model design, we investigate some variants of these two models.

Alternative focus with CRF RNN-CRF based models have achieved state-of-the-art performance on various sequence labeling tasks (Huang et al., 2015; Lu et al., 2015). However, the labeling task we consider here is relatively unsophisticated in the sense that there are only two categories of labels — part of the subject string (SUB) or not (O). Thus, it is worth investigating whether the RNN (BiGRU in our case) is still a critical component when the task gets this simple.
Hence, we establish a CRF baseline which uses traditional features as input. Specifically, the model is trained with the Stanford CRF-NER toolkit [Footnote 7: http://nlp.stanford.edu/software/CRF-NER.shtml] on the same reverse-linked labeling data (section 6.1). For evaluation, we directly compare the sentence-level accuracy of these two models on the test portion of the labeling data. A sentence labeling is considered correct only when all tokens are correctly labeled. [Footnote 8: As the F1 score is usually used as the metric for sequence labeling, sentence-level accuracy is more informative here.] It turns out that the RNN-CRF achieves an accuracy of 95.5%, while the accuracy of the feature-based CRF is only 91.2%. Based on this result, we conclude that the BiGRU plays a crucial role in our focused pruning module.

Subject scoring with average embedding As discussed in section 4.2, the subject network g is chosen to be the same as f, mainly relying on a two-layer BiGRU to produce the semantic question embedding. Although it is a natural choice, it remains unclear whether the final performance is sensitive to this design. Motivated by this question, we substitute the BiGRU with an Embed-AVG model and evaluate the system performance. For this experiment, we always use focused pruning and the type vector, but vary the structure of the relation scoring network to allow high-order interaction across models. The result is summarized in Table 3.

Relation Network | Subject Network: Embed-AVG | Subject Network: BiGRU
Embed-AVG | 71.6 | 72.1
LTG-CNN | 68.0 | 68.6
LTG-CNN+ | 70.4 | 71.1
BiGRU | 75.4 | 75.7
Table 3: System performance with different subject network structures.

Inspecting the table horizontally, when the BiGRU is employed as the subject network, the accuracy is consistently higher regardless of the relation network structure. However, the margin is quite narrow, especially compared to the effect of varying the relation network structure in the same way. We suspect this difference reflects the fact that modeling p(s|r, q) is intrinsically more challenging than modeling p(r|q). It also suggests that learning smooth entity representations with good discriminative power remains an open problem.

7 Conclusion

In this paper, we propose CFO, a novel approach to single-fact question answering. We employ a conditional factoid factorization by inferring the target relation first and then the target subject associated with the candidate relations. To resolve the representation of millions of entities, we proposed a type-vector scheme which requires no training. Our focused pruning largely reduces the candidate space without loss of recall, leading to a significant improvement in overall accuracy. Compared with multiple baselines across three aspects, our method achieves state-of-the-art accuracy on a 108k-question dataset, the largest publicly available one. Future work could extend the proposed method to handle more complex questions.

References

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155.
Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of ACL, volume 7, page 92.
Jonathan Berant and Percy Liang. 2015. Imitation learning of agenda-based semantic parsers. Transactions of the Association for Computational Linguistics, 3:545–558.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs.
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. ACM. Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787–2795. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014a. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 615–620. Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014b. Open question answering with weakly supervised embedding models. In Machine Learning and Knowledge Discovery in Databases, pages 165–180. Springer. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In ACL (1), pages 423–433. Citeseer. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for never-ending language learning. In AAAI, volume 5, page 3. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724–1734. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160– 167. ACM. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Anthony Fader, Luke S Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In ACL (1), pages 1608–1618. Citeseer. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. Kelvin Gu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 318–327. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684–1692. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. 2014. A neural network for factoid question answering over paragraphs. In Empirical Methods in Natural Language Processing. Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2015. Ask me anything: Dynamic memory networks for natural language processing. 
arXiv preprint arXiv:1506.07285. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with onthe-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng, and Paul Smolensky. 2015. Reasoning in vector space: An exploratory study of question answering. arXiv preprint arXiv:1511.06426. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S¨oren Auer, et al. 2014. Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web Journal, 5:1–29. Percy Liang, Michael I Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590– 599. Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446. Zefu Lu, Lei Li, and Wei Xu. 2015. Twisted recurrent network for named entity recognition. In Bay Area Machine Learning Symposium. Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. 2015. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508. 809 Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Vu Pham, Th´eodore Bluche, Christopher Kermorvant, and J´erˆome Louradour. 2014. Dropout improves recurrent neural networks for handwriting recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pages 285–290. IEEE. Hieu Pham, Zihang Dai, and Lei Li. 2015. On optimization algorithms for recurrent networks with long shortterm memory. In Bay Area Machine Learning Symposium. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics, 2:377–392. Noah A Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 354–362. Association for Computational Linguistics. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697–706. ACM. Lappoon R Tang and Raymond J Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Machine Learning: ECML 2001, pages 466–477. Springer. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Conference on Learning Representations (ICLR2015). Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. In International Conference on Learning Representations (ICLR2016). Min-Chul Yang, Nan Duan, Ming Zhou, and Hae-Chang Rim. 2014. Joint relational embeddings for knowledgebased question answering. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 645–650. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In Proceedings of ACL. Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. 
Semantic parsing for single-relation question answering. In Proceedings of ACL.
Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of ACL.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. CoRR, abs/1409.2329.
John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the National Conference on Artificial Intelligence, pages 1050–1055.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 811–822, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Verbs Taking Clausal and Non-Finite Arguments as Signals of Modality – Revisiting the Issue of Meaning Grounded in Syntax Judith Eckle-Kohler Research Training Group AIPHES and UKP Lab Computer Science Department, Technische Universit¨at Darmstadt www.aiphes.tu-darmstadt.de, www.ukp.tu-darmstadt.de Abstract We revisit Levin’s theory about the correspondence of verb meaning and syntax and infer semantic classes from a large syntactic classification of more than 600 German verbs taking clausal and non-finite arguments. Grasping the meaning components of Levin-classes is known to be hard. We address this challenge by setting up a multi-perspective semantic characterization of the inferred classes. To this end, we link the inferred classes and their English translation to independently constructed semantic classes in three different lexicons – the German wordnet GermaNet, VerbNet and FrameNet – and perform a detailed analysis and evaluation of the resulting German–English classification (available at www.ukp.tu-darmstadt. de/modality-verbclasses/). 1 Introduction Verbs taking clausal and non-finite arguments add a further meaning component to their embedded argument. For example, the embedded argument is realized as that-clause in (1) and (2), but understand in (1) marks it as factual and hope in (2) as uncertain. The verb pretend in (3) realizes its embedded argument as non-finite construction and marks it as non-factual. (1) He understands that his computer has a hardware problem. (2) She hopes that her experience will help others. (3) He pretends to take notes on his laptop, but really is updating his Facebook profile. The entities expressed by embedded clausal and non-finite arguments are also called “abstract object” (AO) in the rest of this paper (following Asher (1993)); we will use the linguistic term “modality” (Hacquard, 2011) to subsume the meanings (such as factuality, non-factuality and uncertainty) denoted by AO-selecting verbs. As AO-selecting verbs can change the meaning of a text in important ways, text understanding systems should be sensitive to them. In particular, classifications of AO-selecting verbs according to semantic criteria are important knowledge sources for a wide range of NLP applications, such as event tagging (Saur´ı et al., 2005), commited belief tagging (Prabhakaran et al., 2010), reported speech tagging (Krestel et al., 2008), the detection of uncertainty (Szarvas et al., 2012) and future-oriented content (Eckle-Kohler et al., 2008), textual entailment (Saur´ı and Pustejovsky, 2007; Lotan et al., 2013), or determining the degree of factuality of a given text (Saur´ı and Pustejovsky, 2012; de Marneffe et al., 2012). Accordingly, various semantic classifications of AO-selecting verbs have been developed, e.g., (Kiparsky and Kiparsky, 1970; Karttunen, 1971; Karttunen, 2012), some of them explicitly in the context of NLP (Nairn et al., 2006; Saur´ı, 2008). However, these classifications are constructed manually and often quite limited in coverage. Consequently, extending or adapting them to specific domains or other languages is a major issue. We propose to address this issue by exploiting the relationship between the syntactic behavior of verbs and their meaning following Levin’s theory (Levin, 1993). 
This has not been done yet for verbs signaling modality, as far as we are aware. For the particular category of AO-selecting verbs, Levin’s theory allows constructing verb classifications in a purely syntax-driven way, i.e. inducing semantic classes from syntactically defined 811 classes, and thus possibly also extending given classes using large corpora.1 While the appeal of Levin’s hypotheses is clear, we are aware of a major difficulty, making our approach a challenging research problem: it is very hard to grasp the precise meaning components which are to be associated with a syntactic “Levin” class. At the same time, it is vital to have a good semantic characterization of the meaning components in order to apply such classes to NLP tasks in an informed way. We address these issues and make the following contributions: (i) We consider a purely syntactic classification of more than 600 German AOselecting verbs and induce semantic classes based on findings from formal semantics about correspondences between verb syntax and meaning. This yields an initial description of the meaning components associated with the classes, along with a tentative class name. (ii) In a second step, we refine and extend the semantic characterization of the verb classes by translating it to English and linking it to existing semantic classes in lexical resources at the word sense level: we consider the coarse semantic fields in the German wordnet GermaNet (Kunze and Lemnitzer, 2002), the verb classes in the English lexicon VerbNet (Kipper et al., 2008), and the semantic frames in the English lexicon FrameNet (Baker et al., 1998). As a result, we obtain a detailed semantic characterization of the verb classes, as well as insights into the validity of Levin’s theory across the related languages German and English. (iii) We also perform a task-oriented evaluation of the verb classes in textual entailment recognition, making use of insights from the previous two steps. The results suggest that the verb classes might be a promising resource for this task, for German and for English. 2 Related Work This section summarizes related work about the correspondence between verb meaning and syntax and discusses related work on modality in NLP. Syntactic Reflections of Verb Meaning Semantic verb classifications that are grounded in lexical-syntactic properties of verbs are particularly appealing, because they can automatically be recovered in corpora based on syntactic features. The most well known verb classification 1Abstract objects already characterize the possible semantic roles to a certain extent. based on correspondences between verb syntax and verb meaning is Levin’s classification (Levin, 1993). According to Levin (2015a), verbs that share common syntactic argument alternation patterns also have particular meaning components in common, thus they can be grouped into a semantic verb class. For example, verbs participating in the dative alternation2 can be grouped into a semantic class of verbs sharing the particular meaning component “change of possession”, thus this shared meaning component characterizes the semantic class. Recent work on verb semantics provides additional evidence for this correspondence of verb syntax and meaning: Hartshorne et al. (2014) report that the syntactic behavior of some verbs can be predicted based on their meaning. 
VerbNet is a broad-coverage verb lexicon organized in verb classes based on Levin-style syntactic alternations: verbs with common subcategorization frames and syntactic alternation behavior that also share common semantic roles are grouped into VerbNet classes. VerbNet not only includes the verbs from the original verb classification by Levin, but also more than 50 additional verb classes (Kipper et al., 2006) automatically acquired from corpora (Korhonen and Briscoe, 2004). These classes contain many AO-selecting verbs that were not covered by Levin’s classification. However, VerbNet does not provide information about the modal meaning of AO-selecting verbs and does not reflect fine-grained distinctions between various kinds of modality. There is also some criticism in previous work regarding the validity of Levin’s approach. Baker and Ruppenhofer (2002) and Schnorbusch (2004) both discuss various issues with Levin’s original classification, in particular the difficulty to grasp the meaning components, which are to be associated with a Levin class. While approaches to exploit the syntactic behavior of verbs for the automatic acquisition of semantic verb classes from corpora have been developed in the past, they were used to recover only small verb classifications: Schulte im Walde (2006)’s work considered a semantically balanced set of 168 German verbs, Merlo and Stevenson (2001) used 60 English verbs from three particular semantic classes. In contrast to previous work, we consider a large 2These verbs can realize an argument syntactically either as noun phrase or as prepositional phrase with to. 812 set of more than 600 German AO-selecting verbs and focus on their modal meaning (i.e., expressing factuality or uncertainty). Related Work on Modality in NLP Previous work in NLP on the automatic (and manual) annotation of modality has often tailored the concept of modality to particular applications. Szarvas et al. (2012) introduce a taxonomy of different kinds of modality expressing uncertainty, such as deontic, bouletic, abilitative modality, and use it for detecting uncertainty in an Information Extraction setting. Their uncertainty cues also include verbs. Saur´ı and Pustejovsky (2012) use discrete values in a modality continuum ranging from uncertain to absolutely certain in order to automatically determine the factuality of events mentioned in text. Their automatic approach is based on the FactBank corpus (Saur´ı and Pustejovsky, 2009), a corpus of newswire data with manually annotated event mentions. For the factuality annotation of the event mentions, the human annotators were instructed to primarily base their decision on lexical cues. For example, they used verbs of belief and opinion, perception verbs, or verbs expressing proof. Nissim et al. (2013) introduce an annotation scheme for the cross-linguistic annotation of modality in corpora. Their annotation scheme defines two dimensions which are to be annotated (called layers): factuality (characterizing the embedded proposition or concept) and speaker’s attitude (characterizing the embedding predicate). Their annotation scheme starts from a fixed set of modal meanings and aims at finding previously unknown triggers of modality. However, some modal meanings are not distinguished, in particular those involving future-orientation. 
A classification approach grounded in syntax – as in our work – can be considered complementary: it starts from the syntactic analysis of a large set of trigger words, and induces a broad range of modal meanings based on correspondences between verb syntax and meaning. Our semantic classification of AO-selecting verbs covers a wide range of different kinds of modality in text, thus considerably extending previous work.

3 Inferring Semantic Verb Classes

In this section, we infer semantic verb classes from the syntactic alternation behavior of a large dataset of German AO-selecting verbs. The research hypothesis underlying our method can be summarized as follows: there are correspondences between verb syntax and meaning, i.e., certain syntactic alternations correspond to particular meaning components (Levin, 2015a).

3.1 German Subcategorization Lexicon

We consider a set of 637 AO-selecting verbs given in (Eckle-Kohler, 1999). These verbs are a subset of a subcategorization lexicon (i.e., pairs of lemma and subcategorization frame) that was automatically extracted from large newspaper corpora using a shallow regular expression grammar covering more than 240 subcategorization frames (short: subcat frames). All the subcat frames extracted for a given verb were manually checked and only the correct ones were included in the final lexicon, because high-quality lexical information was crucial in the target application, Lexical Functional Grammar parsing. [Footnote 3: Today, this lexicon is part of the larger resource "IMSLex German Lexicon" (Fitschen, 2004).] Eckle-Kohler (1999) specified the alternation behavior of each AO-selecting verb regarding different types of clausal and non-finite arguments, yielding a syntactic signature for each verb (e.g., 111101 for the verb einsehen (realize), using the encoding in Table 1, top to bottom corresponding to left to right). [Footnote 4: The automatically extracted subcategorization lexicon also contains adjectives and nouns taking clausal or infinitival arguments. However, many of the 1191 nouns and 666 adjectives are derived from verbs, which makes them the central word class.] For this, each verb was inspected regarding its ability to take any of the considered clausal and non-finite constructions as argument – either on the basis of the automatically acquired subcat frames or by making use of linguistic introspection. Linguistic introspection is necessary to reliably identify non-possible argument types, since missing subcat frames that were not extracted automatically are not sufficient as evidence. Although there are 64 possible syntactic signatures according to basic combinatorics, only 46 signatures were found in the data, which group the verbs into 46 classes. While Eckle-Kohler (1999) points out a few semantic characteristics of these classes, most of them lack a semantic characterization. Our goal is to address this gap and to infer shared meaning components for all the classes. For this, we use linguistic research findings as described in the next section.

Argument Type | Y/N | Example
daß(that)-clause | 1/0 | sehen (see)
zu(to)-infinitive, present | 1/0 | versuchen (try)
zu(to)-infinitive, past | 1/0 | bereuen (regret)
wh-clause | 1/0 | einsehen (realize)
ob(whether/if)-clause | 1/0 | fragen (ask)
declarative clause | 1/0 | schreien (shout)
Table 1: Clausal and infinitival arguments distinguished in the syntactic classification; the possibility of each type is encoded as 1 (possible) or 0 (not possible).
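To illustrate the encoding, the following small sketch (our own illustration, not part of the original lexicon tooling; the identifiers are invented) derives the six-slot signature of Table 1 from a verb's admissible argument types:

# Order of slots follows Table 1, top to bottom.
ARGUMENT_TYPES = [
    "dass_clause",         # daß(that)-clause
    "zu_infinitive_pres",  # zu(to)-infinitive, present
    "zu_infinitive_past",  # zu(to)-infinitive, past
    "wh_clause",           # wh-clause
    "ob_clause",           # ob(whether/if)-clause
    "declarative_clause",  # declarative clause
]

def signature(possible_arguments):
    # possible_arguments: the set of argument types the verb can take.
    return "".join("1" if a in possible_arguments else "0" for a in ARGUMENT_TYPES)

# einsehen (realize): everything except an ob(whether/if)-clause -> "111101"
print(signature({"dass_clause", "zu_infinitive_pres", "zu_infinitive_past",
                 "wh_clause", "declarative_clause"}))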
3.2 Findings from Formal Semantics

We employ the following findings on correspondences between verb meaning and syntax in order to infer semantic classes from the syntactic signatures. This also gives rise to tentative names (labels) for the corresponding meaning components.

Factuals: the that-wh and the that-wh/if alternation. Verbs that are able to alternatively take that- and wh-clauses coerce the embedded interrogative and declarative clauses into factual AOs, corresponding to a particular fact (Ginzburg, 1996). Among the verbs showing the that-wh alternation are the well-known factive verbs (Kiparsky and Kiparsky, 1970) (e.g., She proves that she exists. vs. She proves who she is. vs. He proves whether he can mine gold.). There is a further distinction among these verbs regarding the ability to take an embedded if/whether-question: Schwabe and Fittler (2009) show that the that-wh/if alternation is connected to objective verbs entailing the existence of an independent witness, whereas the that-wh alternation (i.e., an if/whether-question is not possible) occurs with non-objective verbs (e.g., He regrets whom he ended up with. vs. ⋆He regrets whether he ended up playing this game.).

"Aspectuals": the inability to take that-clauses and to-infinitives in the past tense. Recently, linguistic research has increasingly addressed particular semantic aspects of to-infinitives. Kush (2011) has investigated AOs that can neither be realized as that-clause nor as to-infinitive in the past tense (e.g., She hesitates to answer. vs. ⋆She hesitates to have answered. vs. ⋆She hesitates that ...). [Footnote 7: This is the literal translation of the German equivalent to English. In English, the ing-form in the past would be more typical than a to-infinitive in the past tense.] These AOs are selected by control verbs [Footnote 8: "Control" refers to the co-reference between the implicit subject of the infinitival argument and syntactic arguments in the main clause, either the subject (subject control) or the direct object (object control).] and can be characterized as mental actions. Kush (2011) points out that the verbs selecting those AOs have an aspectual meaning in common.

Future orientation: to-infinitives in the present tense and the inability to take to-infinitives in the past tense. Laca (2013) has investigated verbs across English and Spanish that embed future-oriented AOs. Only future-oriented AOs can be used with future-oriented adverbials, such as tomorrow, and these AOs are often realized as non-finite constructions, e.g., to-infinitives. She points out that not only control verbs take future-oriented AOs, but also verbs expressing attitudes of preference. This finding implies that such future-oriented AOs are typically incompatible with past-oriented adverbials (e.g., yesterday) and verb forms in the past tense (e.g., ⋆She plans having finished the assignment yesterday.).

3.3 Mapping to Meaning Components

We automatically infer semantic classes based on a manually constructed mapping between the syntactic signatures from Eckle-Kohler (1999) and the meaning components grounded in syntax summarized in Section 3.2. [Footnote 9: We did not consider verbs that can be used with all kinds of clausal and infinitival arguments, such as the majority of communication verbs (e.g., comment, whisper).] We constructed this mapping in two steps: In a first step, the signatures are aligned to the meaning components from Section 3.2 based on substrings of the signatures: future-orientation matches the 110 prefix, aspectual the 010 prefix, and factuality matches 1's in the fourth or fifth position. It is important to point out that future-orientation can be combined with factuality: this corresponds to an independent matching of the 110 prefix and the factuality substring.
While this combination may seem contradictory, it reflects the lexical data and shows that also weak forms of factuality ("it will most likely be factual at some point in the future") are expressed in language. In a second step, the pre-aligned signatures are merged if the remaining slots of the signature are either 1 or 0 (i.e., the respective argument types can or cannot occur); in the resulting merged signature, these slots are left underspecified (a small illustrative sketch of this two-step mapping follows Table 2 below).

signature | #verbs – examples | meaning components | semantic characterization (#linked verbs)
010 --- | 36 (6%) – wagen (dare), zögern (hesitate), weigern (refuse) | aspectual: verbs expressing the ability of doing an action | VN (2): consider-29.9, wish-62; FN (2): purpose, cogitation
110 0-- | 195 (31%) – anbieten (offer), empfehlen (recommend), fordern (demand) | future-oriented: verbs marking AOs as anticipated, planned | VN (89): force-59, forbid-67, wish-62, promote-102, urge-58.1, order-60, admire-31.2, order-60, promise-37.13; FN (43): request, preventing
000 11- | 15 (2%) – nachfragen (inquire), anfragen (ask) | interrogative: verbs marking AOs as under investigation | VN (3): estimate-34.2, inquire-37.1.2, order-60; FN (1): questioning, request
111 1-- | 122 (19%) – bedauern (regret), überwinden (overcome), danken (thank) | wh-factual: opinion verbs marking AOs as factual | VN (45): transfer-mesg-37.1.1, wish-62, admire-31.2, complain-37.8, conjecture-29.5, say-37.7; FN (18): statement, reveal-secret
110 10- | 30 (5%) – befürworten (approve), verteidigen (defend), loben (praise) | future-oriented wh-factual: opinion verbs marking AOs as future-oriented and factual | VN (15): admire-31.2, allow-64, transfer-mesg-37.1.1, suspect-81, characterize-29.2, neglect-75, want-32.1, defend-85, comprehend-87.2; FN (10): judgment, grant-permission, defend, experiencer-focus, judgment-communication, justifying, hit-or-miss, statement, reasoning, tolerating, grasp
1-- 11- | 120 (19%) – beschreiben (describe), hören (hear), erinnern (remember) | wh/if-factual: objective verbs marking AOs as factual | VN (55): discover-84, say-37.7, see-30.1, comprehend-87.2, rely-70, seem-109, consider-29.9, transfer-mesg-37.1.1, estimate-34.2, inquire-37.1.2; FN (23): perception-experience, statement, cogitation, grasp
110 11- | 48 (8%) – festlegen (determine), abschätzen (assess), lehren (teach) | future-oriented wh/if-factual: objective verbs marking AOs as future-oriented and factual | VN (28): estimate-34.2, rely-70, indicate-78, transfer-mesg-37.1.1, correspond-36.1, conjecture-29.5, discover-84, say-37.7; FN (16): predicting, education-teaching, assessing, reliance, reasoning
111 0-- | 66 (10%) – vorwerfen (accuse), bestreiten (deny), fürchten (fear) | non-factual: verbs marking AOs as not resolvable re. their factuality | VN (28): conjecture-29.5, wish-62, complain-37.8, admire-31.2; FN (13): statement, reveal-secret, experiencer-focus, certainty
Table 2: The 632 verbs in 8 semantic classes (5 verbs show idiosyncratic behavior). Signature substrings in bold correspond to meaning components, which (along with tentative class names) are based on Sec. 3.2. The cross-lingual semantic characterization shows aligned VerbNet (VN) classes covering 265 (42%) verbs and aligned FrameNet (FN) frames covering 126 (20%) verbs, see Sec. 4.1.
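The sketch below is our own simplification of this two-step mapping; the dash-free spelling of the merged signatures and the list-valued return are ours and not part of the original procedure.

# Merged signatures from Table 2 ('-' = underspecified slot), written without
# the space used in the table.
MERGED_SIGNATURES = {
    "010---": "aspectual",
    "1100--": "future-oriented",
    "00011-": "interrogative",
    "1111--": "wh-factual",
    "11010-": "future-oriented wh-factual",
    "1--11-": "wh/if-factual",
    "11011-": "future-oriented wh/if-factual",
    "1110--": "non-factual",
}

def matches(signature, merged):
    # A concrete 0/1 signature is compatible with a merged one if it agrees
    # on every specified slot.
    return all(m == "-" or m == s for s, m in zip(signature, merged))

def compatible_classes(signature):
    return [label for merged, label in MERGED_SIGNATURES.items()
            if matches(signature, merged)]

# The running example from Section 3.1: einsehen (realize), signature 111101.
print(compatible_classes("111101"))  # -> ['wh-factual']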
Merging the signatures in this way yields 8 partially underspecified signatures, which correspond to the final semantic classes. This procedure covers more than 99% of the 637 verbs under investigation: only 5 verbs showed idiosyncratic syntactic behavior, 4 of those being verbs that can take an AO as subject (e.g., bedeuten (mean)). As a consequence of the automatic part of this procedure, every verb is assigned to exactly one class – a simplification which we plan to resolve as part of future work. Table 2 provides an overview and a characterization of these classes, also showing the final signatures and their substrings which correspond to the meaning components. The non-factual class is derived from the wh-factual class: the only difference is the inability to take a wh-clause (e.g., ⋆He hopes, when he will succeed.). While the descriptions of the meaning components and the class names are inspired by research in linguistics (typically a very deep analysis of only a few verbs), transferring them to our verb resource – which is of much larger scale – inevitably leads to outlier verbs in the classes, i.e., verbs that do not strictly match the class label. Examples include verbs such as überlegen (consider) in the wh/if-factual class (not covering the future-oriented meaning component) or schaden (harm) as non-factual rather than wh-factual. For this reason, and also because of the assignment of highly polysemous verbs to only one class, the definitions of meaning components and the class names should rather be considered as loose, providing a first tentative semantic characterization of the modality classes. In sum, this section presented an inventory of modal meaning components that we primarily synthesized from research in linguistics. The classification work is strictly grounded in syntactic properties of the verbs and was not targeted a priori at modal meanings.

4 Evaluation

4.1 Linking to Semantic Classes

Our first set of experiments aims at refining the initial semantic characterization of the classes by linking them to independently constructed semantic classifications at the word sense level. Specifically, we consider three different semantic classifications from computational lexicons, which have been created by linguistic experts: (i) the so-called semantic fields in GermaNet, grouping verb senses into 15 coarse classes, such as perception and emotion, (ii) the verb classes given in VerbNet, and (iii) the Frame-semantic frames in FrameNet. As the GermaNet and FrameNet classes are based on different lexicographic and linguistic theories, we expect an additional semantic characterization from the linking. The VerbNet classes, which also follow Levin's hypotheses, are however used to investigate whether the syntax-semantics correspondence is maintained across the related languages German and English. For this linking experiment, we used the UBY framework (Gurevych et al., 2012) [Footnote 10: http://www.ukp.tu-darmstadt.de/uby/], containing standardized versions of the above lexicons, as well as a linking between VerbNet and FrameNet on the word sense level.

Approach In order to link our classes to verb senses in GermaNet and VerbNet, we developed an automatic linking method based on subcat frame similarity.
Recognizing subcat frame similarity requires a common standardized format for the otherwise incomparable frames. UBY provides such a standardized format, which has been presented in detail by Eckle-Kohler and Gurevych (2012). It represents subcat frames uniformly across German and English, and at a fine-grained level of individual syntactic arguments. Our linking approach is based on the following hypothesis: two verb senses with equivalent lemmas are equivalent if they have similar subcat frames. [Footnote 11: This approach is applicable for GermaNet, because GermaNet contains fine-grained syntactic subcat frames.] Our method interprets the pairs of verb and subcat frames listed in our classification as senses. [Footnote 12: We consider only verb senses that are compatible with AOs, as indicated by subcat frames with clausal or non-finite arguments.] While we do not claim that this hypothesis is sufficient in general, i.e., for all verb senses, we found that it is valid for the subset of senses belonging to the class of AO-selecting verbs. The cross-lingual linking of our classes to VerbNet senses requires an additional translation step, which we describe first.

Manual Translation While UBY also provides translations between German and English verb senses, e.g., as part of the Interlingual Index from EuroWordNet (ILI), we found that many of the translations were not present in our target lexicon VerbNet. Therefore, the main author of this paper, a native speaker of German with a good proficiency in English, translated the AO-compatible verbs (i.e., word senses) manually using Linguee [Footnote 13: Linguee (http://www.linguee.de/) is a translation tool combining an editorial dictionary and a search engine processing bilingual texts. In particular, it provides a large variety of contextual translation examples.] and dictionaries. This took about 7 hours. For 23 German verbs, we could not find any equivalent lexicalized translation, because these verbs express very fine-grained semantic nuances. For example, we did not find an equivalent English verb for a few verbs in the aspectual class, but only a translation consisting of an adjective in combination with to be. Examples include be easy (leichtfallen), be willing (sich bereitfinden), be capable (vermögen), which have German equivalents that are lexicalized as verbs. As a result, we arrived at translations for 614 out of 637 German verbs. These 614 German verbs are translated to 413 English verbs, indicating that the English translation has a more general meaning in many cases.

Automatic Verb Sense Linking Our algorithm links a German verb sense (or its English translation) with a GermaNet (or VerbNet) sense if the subcat frames of both verb senses have the same number of arguments and if the arguments have certain features in common. [Footnote 14: We do not link the subcat frames, but we do compare them across the related languages German and English to determine their similarity in the context of linking.] For example, to create a link to GermaNet, features such as the complementizer of clausal arguments and the case of noun phrase arguments have to agree. In a similar way, the linking to VerbNet is based on a comparison of German subcat frames and English subcat frames – which are represented uniformly across German and English. In Section A.2, we provide more details about the algorithm.
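As an illustration of this frame-similarity test, consider the following heavily simplified sketch. It is ours; the feature names and data layout are assumptions for illustration and do not reflect the actual UBY data model.

def arguments_agree(arg_a, arg_b, features=("complementizer", "case")):
    # Two arguments agree if the selected features match (a missing feature,
    # e.g. no case on a clausal argument, matches another missing feature).
    return all(arg_a.get(f) == arg_b.get(f) for f in features)

def frames_similar(frame_a, frame_b):
    # Each frame is a list of argument feature dicts, e.g.
    # [{"complementizer": "dass", "case": None}, {"complementizer": None, "case": "nom"}].
    return (len(frame_a) == len(frame_b)
            and all(arguments_agree(a, b) for a, b in zip(frame_a, frame_b)))

def link_senses(source_senses, target_senses):
    # Link two senses if their lemmas correspond (here simply string equality,
    # after translation on the cross-lingual side) and they share a similar frame.
    links = []
    for src in source_senses:
        for tgt in target_senses:
            if src["lemma"] == tgt["lemma"] and any(
                    frames_similar(f, g) for f in src["frames"] for g in tgt["frames"]):
                links.append((src["id"], tgt["id"]))
    return links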
Results According to a manual evaluation of a random sample of 200 sense pairs, the automatic verb sense linking yielded an accuracy of 89.95% for the linking to GermaNet, and 87.54% for the linking to VerbNet (κ agreement on the sample annotated by two annotators was 0.7 and 0.8, respectively). The main types of errors in the linking to GermaNet and VerbNet are due to specific syntactic features of the subcat frames which diverge and are not considered in the automatic linking. The differences regarding these specific features are due to cross-lingual differences (VerbNet, e.g., verb phrase arguments with ing-form) and diverging linguistic analyses of particular constructions (GermaNet, e.g., constructions with es (it)); see also Eckle-Kohler and Gurevych (2012). By linking the verbs in our classification to semantic classes in GermaNet, VerbNet and FrameNet, we obtain a three-way semantic characterization of our classes. The linking to the GermaNet semantic fields covers 270 (43%) of the source verbs. Of these, 219 (81%) are linked to the three semantic fields cognition, communication and social. Fewer verbs (32 (12%)) are linked to the semantic fields emotion, perception and change. Semantic fields not among the target classes are consumption, competition, contact, body and weather. Table 2 summarizes the linking to VerbNet and FrameNet and shows how many verbs from each source class could be linked to any of the classes in VerbNet or FrameNet. [Footnote 15: Based on the percentage of source class members linked to any of the target classes, we only display target classes with an overlap of at least 1.8% due to space constraints.] As the class distribution of the verb subsets covered by our linking-based evaluation is similar to that of the original classes, we consider our evaluation as valid, although less than 50% of all verbs could be evaluated this way. The target classes in VerbNet and FrameNet reveal meaning components that are on the one hand unique to individual classes, and on the other hand shared across several German classes. The future-oriented class contains object control verbs (e.g., force-59, forbid-67 in VerbNet, and request, preventing in FrameNet). The wh/if-factual class is unique regarding the cognition and perception verbs (e.g., discover-84, see-30.1-1, and perception-experience). The future-directed wh/if-factual class also contains objective assessment verbs, as shown by the estimate-34.2 class. The verbs in the two wh-factual classes share meaning components as well, as shown by the opinion verb classes admire-31.2 and defend-85 in VerbNet or judgment and tolerating in FrameNet. While there are also other VerbNet and FrameNet classes shared across several classes, they turned out to be very general and underspecified regarding their meaning, thus not contributing to a more fine-grained semantic characterization. For example, the conjecture-29.5 class assembles quite diverse conjecture verbs, e.g., verbs expressing opinion (feel, trust) and factuality (observe, discover).
A similar observation holds for the statement frame in FrameNet.

4.2 Analysis of Frequency and Polysemy

In order to assess the usefulness of the verb resource for NLP tasks, we determined the lemma frequency of all verbs in the 8 classes in SDeWaC (Faaß and Eckart, 2013), a cleaned version of the German DeWaC corpus (Baroni and Kilgarriff, 2006). A ranking of the verbs according to their lemma frequency showed that 89% of the verbs occur more than 50 times in SDeWaC. [Footnote 16: In the verb resource we provide for download, we included this frequency information in order to enable frequency-based filtering.] We also analyzed the frequency distribution of the 8 verb classes in two other German corpora belonging to different genres, and also for English, see Table 3: [Footnote 17: Details of the computation of the verb lemma frequency lists are given in appendix A.1.] encyclopedic text (the German Wikipedia [Footnote 18: www.wikipedia.de, dump of 2009-06-18]), German newspaper text (the Tiger corpus (Brants et al., 2004)), and the English Reuters-21578 corpus [Footnote 19: Reuters-21578, Distribution 1.0, see http://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html].

Verb class | Wiki | Web | News | News Eng.
all | 25.85 | 50.58 | 33.91 | 25.31
aspectual | 0.90 | 0.80 | 1.44 | 1.96
future-oriented | 9.45 | 23.04 | 13.65 | 12.58
interrogative | 0.01 | 0.05 | 0.05 | 0.65
wh-factual | 4.26 | 17.89 | 4.99 | 3.48
fo. wh-factual | 0.29 | 0.28 | 0.85 | 1.14
wh/if-factual | 3.02 | 2.54 | 3.53 | 5.20
fo. wh/if-factual | 2.36 | 1.77 | 3.14 | 5.75
non-factual | 4.29 | 3.36 | 4.84 | 3.57
Table 3: Percentage of classes in corpora: German Wikipedia (Wiki), SDeWaC (Web), Tiger (News); English Reuters corpus (News Eng.).

Table 3 shows that the large verb classes constitute a substantial proportion of verb occurrences across different genres. This suggests that the verb classes might be useful features for various text classification tasks. We performed a further analysis of the polysemy of the German and English verbs in our classes relative to several fine and coarse word sense inventories. Regarding GermaNet, there are 2.28 senses per verb (1.53 for all GermaNet verbs), whereas WordNet lists 5.11 senses per verb (2.17 for all WordNet verbs). In VerbNet, we find 1.74 senses per verb (1.42 for all VerbNet verbs), and in FrameNet 1.96 (1.52 for all FrameNet verbs). This analysis shows that the task of automatic sense linking is particularly hard for the category of AO-selecting verbs we consider. Whether the polysemy is an issue for any application where the verb classes are used as features is not a priori clear and depends on the task at hand.

4.3 Textual Entailment Experiment

For an extrinsic evaluation, we investigated the usefulness of the German and the English verb classes as features in recognizing textual entailment (RTE). In RTE, the task is to determine whether for a pair of text fragments – the text T and the hypothesis H – the meaning of H is entailed by T (Dagan et al., 2006); for non-entailing pairs, a further category "unknown" is sometimes used as a label. We employed a simple classification-based approach to RTE and trained and evaluated a Naive Bayes classifier on the test sets of three RTE benchmarks, using 10-fold cross-validation: the English RTE-3 data (Giampiccolo et al., 2009) and their German translation [Footnote 20: http://www.dfki.de/~neumann/resources/RTE3_DE_V1.2_2013-12-02.zip] (the development sets and the test sets each consist of 800 pairs), and an expanded version of the English RTE-3 data from the Sagan Textual Entailment Test Suite (Castillo, 2010) consisting of 2974 pairs. While the German dataset provides a two-way classification of the T-H pairs, the two English datasets provide a three-way classification, also using the "unknown" label. We used the DKPro TC framework (Daxenberger et al., 2014) for classification and applied POS tagging and lemmatization as preprocessing.
As a baseline feature, we use the word overlap measure between T and H (no stopword filtering, no lemmatization, no normalization of the overlap score), which is quite competitive on the RTE-3 data, because this dataset shows a high difference in word overlap between positive (entailment) and negative (no entailment) pairs (Bentivogli et al., 2009). An analysis of the development set of the German RTE-3 data showed that 62% of the pairs contain at least one occurrence of any of the verbs from the classification in either T or H. However, T and H fragments display no statistically significant differences [Footnote 21: All significance scores in this paper are based on Fisher's exact test at significance level p<0.05.] regarding the occurrences of any of the verb classes. A detailed analysis revealed that pairs without entailment are often characterized by a mismatch between T and H regarding the presence of factuality markers. For example, the presence of verbs indicating uncertainty (all classes apart from wh-factual and wh/if-factual) in T and an absence of such verbs in H might indicate non-entailment, as in the following non-entailing pair from the English RTE-3 development set, where "long" signals non-factuality, but "researching" signals factuality:

T: The BBC's Americas editor Will Grant says many Mexicans are tired of conflict and long for a return to normality.
H: Will Grant is researching a conflict with Mexicans.

Thus, an insufficient overlap of modality markers in T and H might actually indicate non-entailment, but lead to an incorrect classification as entailment when considering only word overlap. Accordingly, we implemented a factuality-mismatch feature both for German and for English, based on our new German and English classes. This feature is similar to the word overlap feature, but with lemmatization and normalization of the overlap score. Verb class counts are based on verb lemma counts of the member verbs; for English verbs that are members of more than one class, we included all verb classes in our factuality-mismatch feature. [Footnote 22: In the German part, every verb is assigned to one class, while the translation to English resulted in 22% of the English verbs being members of more than one class. However, only 11% of the multiple class assignments involve a combination of factual and uncertainty classes.] Table 4 shows the results. While the differences for RTE-3 DE and RTE-3 EN are not statistically significant, the factuality-mismatch feature yielded a small but significant improvement on the expanded RTE-3 EN dataset. This is due to the different nature of the expanded RTE dataset, which was created using a paraphrasing technique. As a result, the number of occurrences of verbs from our classes increased, and the factuality mismatch became a discriminative feature for distinguishing between CONTRADICTION and UNKNOWN/ENTAILMENT. Considering the fact that we employed only simple overlap features that do not rely on dependency parsing and did not perform any word sense disambiguation, these results suggest that the verb classes might be promising features for RTE, both for German and English. As factuality can be expressed by a variety of further linguistic means, including modal verbs, negation, tense and certain adverbs, investigating the combination of our verb classes with other modality signals might be especially promising as part of future work.
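For concreteness, a rough sketch of the two features used above follows. The exact normalization of the factuality-mismatch score is not spelled out in the text, so the variant below is an assumption of ours; all names are illustrative.

def word_overlap(t_tokens, h_tokens):
    # Baseline: raw count of shared surface tokens, no stopword filtering,
    # no lemmatization, no normalization.
    return len(set(t_tokens) & set(h_tokens))

def class_counts(lemmas, verb_classes):
    # verb_classes: dict mapping a verb lemma to the set of its class labels.
    counts = {}
    for lemma in lemmas:
        for label in verb_classes.get(lemma, ()):
            counts[label] = counts.get(label, 0) + 1
    return counts

def factuality_mismatch(t_lemmas, h_lemmas, verb_classes):
    # Normalized overlap of per-class verb counts between T and H; a low value
    # signals a mismatch of factuality/uncertainty markers.
    t, h = class_counts(t_lemmas, verb_classes), class_counts(h_lemmas, verb_classes)
    shared = sum(min(c, h[label]) for label, c in t.items() if label in h)
    total = sum(t.values()) + sum(h.values())
    return 0.0 if total == 0 else 2.0 * shared / total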
WO 59.87 54.75 54.98 WO+FM 59.25 54.62 58.81 Table 4: Accuracy of a Naive Bayes classifier (10fold cross validation on the test sets) with word overlap (WO) and additional factuality-mismatch (WO+FM) features. 5 Results and Discussion Our construction of semantic classes from the syntactic behavior of AO-selecting verbs results in an inventory of modal meanings that emerged from a large lexical resource. The main result of the linking based evaluation is a detailed semantic characterization of the inferred classes – a prerequisite for using them in NLP tasks in an informed way. The semantic classes seem to be particular suited for tasks related to opinion analysis, textual inference, or argumentation mining. In this context, the relationship between our large resource of lexical verbs and the closed class of modal verbs might be an interesting question for future research. Most of all, the linking to GermaNet and FrameNet shows that it is indeed possible to narrow down meaning components for Levin classes. Moreover, the results of the linking to VerbNet also provide support for Levin’s hypothesis that the correspondences between verb syntax and meaning described for English largely apply to the related language German as well (Levin, 2015b). The English version of the semantic classes which we created by means of translation has the same semantic properties as the German classes. However, the syntactic properties of the English classes are not fully specified, because English has additional kinds of non-finite arguments, such as ing-forms or bare infinitives. Therefore, it might be interesting to address this question in the future and to build a similar semantic classification for English from scratch, in particular in the context of extracting modality classes from corpora. This would require an adaptation of the syntactic signatures, considering the various kinds of nonfinite arguments particular to English. Based on large subcategorization lexicons available for English (e.g. COMLEX (Grishman et al., 1994) or VerbNet), it should be feasible to derive such signatures and to construct a mapping of signatures to modality aspects in a similar way as for German. The question whether the syntactic signatures can be recovered in large corpora is particularly interesting, because this would allow extending the existing classes and to also acquire AO-selecting adjectives and nouns. We plan to investigate this question as part of future work. 6 Conclusion We inferred semantic classes from a large syntactic classification of German AO-selecting verbs based on findings from formal semantics about correspondences between verb syntax and meaning. Our thorough evaluation and analysis yields detailed insights into the semantic characteristics of the inferred classes, and we hope that this allows an informed use of the resulting resource in various semantic NLP tasks. Acknowledgments This work has been supported by the Volkswagen Foundation as part of the LichtenbergProfessorship Program under grant No. I/82806 and by the German Research Foundation under grant No. GU 798/17-1 and No. GRK 1994/1. We thank the anonymous reviewers for their valuable comments. Additional thanks go to Anette Frank, Iryna Gurevych and Ani Nenkova for their helpful feedback on earlier versions of this work. 819 References Nicholas Asher. 1993. Reference to Abstract Objects in Discourse. Studies in Linguistics and Philosophy (Book 50). Springer. Collin F. Baker and Josef Ruppenhofer. 2002. FrameNet’s Frames vs. 
Levin’s Verb Classes. In Proceedings of 28th Annual Meeting of the Berkeley Linguistics Society, pages 27–38, Berkeley, CA, USA. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL), pages 86–90, Montreal, Canada. Marco Baroni and Adam Kilgarriff. 2006. Large Linguistically-Processed Web Corpora for Multiple Languages. In Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 87– 90, Trento, Italy. Luisa Bentivogli, Bernardo Magnini, Ido Dagan, Hoa Trang Dang, and Danilo Giampiccolo. 2009. The Fifth Pascal Recognizing Textual Entailment Challenge. In Proceedings of the Text Analysis Conference (TAC), pages 14–24, Gaithersburg, Maryland, USA. Sabine Brants, Stefanie Dipper, Peter Eisenberg, Silvia Hansen, Esther K¨onig, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit. 2004. TIGER: linguistic interpretation of a German corpus. Research on Language and Computation, 2(4):597–620. Julio J. Castillo. 2010. Using Machine Translation Systems to Expand a Corpus in Textual Entailment. In Hrafn Loftsson, Eirkur Rgnvaldsson, and Sigrn Helgadttir, editors, Advances in Natural Language Processing, volume 6233 of Lecture Notes in Computer Science, pages 97–102. Springer, Berlin Heidelberg. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual Entailment Challenge. In Joaquin Quionero-Candela, Ido Dagan, Bernardo Magnini, and Florence dAlch Buc, editors, Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, volume 3944 of Lecture Notes in Computer Science, pages 177– 190. Springer Berlin Heidelberg. Johannes Daxenberger, Oliver Ferschke, Iryna Gurevych, and Torsten Zesch. 2014. Dkpro tc: A java-based framework for supervised learning experiments on textual data. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 61–66, Baltimore, MD, USA. Marie-Catherine de Marneffe, Christopher D. Manning, and Christopher Potts. 2012. Did It Happen? The Pragmatic Complexity of Veridicality Assessment. Computational Linguistics, 38(2):301–333, June. Richard Eckart de Castilho and Iryna Gurevych. 2014. A Broad-Coverage Collection of Portable NLP Components for Building Shareable Analysis Pipelines. In Proceedings of the Workshop on Open Infrastructures and Analysis Frameworks for HLT (OIAF4HLT) at COLING 2014, pages 1–11, Dublin, Ireland. Judith Eckle-Kohler and Iryna Gurevych. 2012. Subcat-LMF: Fleshing Out a Standardized Format for Subcategorization Frame Interoperability. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 550–560, Avignon, France. Judith Eckle-Kohler, Michael Kohler, and Jens Mehnert. 2008. Automatic recognition of german news focusing on future-directed beliefs and intentions. Computer Speech and Language, 22(4):394–414, October. Judith Eckle-Kohler. 1999. Linguistisches Wissen zur automatischen Lexikon-Akquisition aus deutschen Textcorpora. Logos-Verlag, Berlin, Germany. PhD Thesis, Universit¨at Stuttgart, Germany. Gertrud Faaß and Kerstin Eckart. 2013. SdeWaC – A Corpus of Parsable Sentences from the Web. 
In Iryna Gurevych, Chris Biemann, and Torsten Zesch, editors, Language Processing and Knowledge in the Web: Proceedings of the 25th Conference of the German Society for Computational Linguistics (GSCL 2013), Darmstadt, Germany, September 2527, 2013., pages 61–68. Springer, Berlin, Heidelberg. Arne Fitschen. 2004. Ein Computerlinguistisches Lexikon als komplexes System. PhD Thesis, Universit¨at Stuttgart, Germany. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2009. The Third PASCAL Recognizing Textual Entailment Challenge. In Proceedings of the Workshop on Textual Entailment and Paraphrasing at ACL 2009, pages 1–9, Prague, Czech Republic. Jonathan Ginzburg. 1996. Interrogatives: Questions, Facts, and Dialogue. In Shalom Lappin, editor, The Handbook of Contemporary Semantic Theory, pages 385–422. Blackwell, Oxford, UK. Ralph Grishman, Catherine Macleod, and Adam Meyers. 1994. Comlex Syntax: Building a Computational Lexicon. In Proceedings of the 15th International Conference on Computational Linguistics (COLING), pages 268–272, Kyoto, Japan. 820 Iryna Gurevych, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M. Meyer, and Christian Wirth. 2012. UBY - A Large-Scale Unified Lexical-Semantic Resource Based on LMF. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 580–590, Avignon, France. Valentine Hacquard. 2011. Modality. In Claudia Maienborn, Klaus von Heusinger, and Paul Portner, editors, Semantics: An International Handbook of Natural Language Meaning. HSK 33.2, pages 1484– 1515. Berlin: Mouton de Gruyter. Joshua K. Hartshorne, Claire Bonial, and Martha Palmer. 2014. The VerbCorner Project: Findings from Phase 1 of Crowd-Sourcing a Semantic Decomposition of Verbs. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 397–402, Baltimore, MD, USA. Lauri Karttunen. 1971. Implicative Verbs. Language, pages 340–358. Lauri Karttunen. 2012. Simple and Phrasal Implicatives. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics, pages 124– 131, Montr´eal, Canada. Paul Kiparsky and Carol Kiparsky, 1970. Fact. Mouton, The Hague. Karin Kipper, Anna Korhonen, Neville Ryant, and Marthe Palmer. 2006. Extending VerbNet with Novel Verb Classes. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), pages 1027–1032, Genoa, Italy. Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A Large-scale Classification of English Verbs. Language Resources and Evaluation, 42:21–40. Anna Korhonen and Ted Briscoe. 2004. Extended Lexical-Semantic Classification of English Verbs. In Proceedings of the Workshop on Computational Lexical Semantics at HLT-NAACL 2004, pages 38– 45, Boston, Massachusetts, USA. Ralf Krestel, Sabine Bergler, and Ren Witte. 2008. Minding the Source: Automatic Tagging of Reported Speech in Newspaper Articles. In Nicoletta Calzolari et al., editor, Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC), pages 2823–2828, Marrakech, Morocco. Claudia Kunze and Lothar Lemnitzer. 2002. GermaNet – Representation, Visualization, Application. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC), pages 1485–1491, Las Palmas, Canary Islands, Spain. Dave Kush. 2011. Mental Action and Event Structure in the Semantics of ‘try’. 
In Proceedings of the 21st Semantics and Linguistic Theory Conference, pages 413–425, New Brunswick, New Jersey, USA. Brenda Laca. 2013. Temporal Orientation and the Semantics of Attitude Verbs. In Karina Veronica Molsing and Ana Maria Tramunt Iba˜nos, editors, Time and TAME in Language, pages 158–180. Cambridge Scholars Publishing, Newcastle upon Tyne, UK. Beth Levin. 1993. English Verb Classes and Alternations. The University of Chicago Press, Chicago, USA. Beth Levin. 2015a. Semantics and Pragmatics of Argument Alternations. Annual Review of Linguistics, 1(1):63–83. Beth Levin. 2015b. Verb Classes Within and Across Languages. In Andrej Malchukov and Bernard Comrie, editors, Valency Classes in the Worlds Languages (Volume 2): Case Studies from Austronesia, the Pacific, the Americas, and Theoretical Outlook, pages 1627–1670. Berlin, Boston: De Gruyter Mouton. Amnon Lotan, Asher Stern, and Ido Dagan. 2013. TruthTeller: Annotating Predicate Truth. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 752–757, Atlanta, Georgia. Paola Merlo and Suzanne Stevenson. 2001. Automatic Verb Classification Based on Statistical Distributions of Argument Structure. Computational Linguistics, 27(3):373–408, September. Rowan Nairn, Cleo Condoravdi, and Lauri Karttunen. 2006. Computing Relative Polarity for Textual Inference. Inference in Computational Semantics (ICoS-5), pages 20–21. Malvina Nissim, Paola Pietrandrea, Andrea Sanso, and Caterina Mauri. 2013. Cross-Linguistic Annotation of Modality: a Data-Driven Hierarchical Model. In Proceedings of the 9th Joint ISO - ACL SIGSEM Workshop on Interoperable Semantic Annotation, pages 7–14, Potsdam, Germany. Vinodkumar Prabhakaran, Owen Rambow, and Mona Diab. 2010. Automatic Committed Belief Tagging. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 1014–1022, Beijing, China. Roser Saur´ı and James Pustejovsky. 2007. Determining Modality and Factuality for Text Entailment. In Proceedings of the International Conference on Semantic Computing, ICSC ’07, pages 509–516, Washington, DC, USA. IEEE Computer Society. Roser Saur´ı and James Pustejovsky. 2009. FactBank: a Corpus Annotated with Event Factuality. Language Resources and Evaluation, 43(3):227–268. 821 Roser Saur´ı and James Pustejovsky. 2012. Are You Sure That This Happened? Assessing the Factuality Degree of Events in Text. Computational Linguistics, 38(2):261–299, June. Roser Saur´ı, Robert Knippen, Marc Verhagen, and James Pustejovsky. 2005. Evita: A Robust Event Recognizer for QA Systems. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 700–707, Vancouver, British Columbia, Canada. Roser Saur´ı. 2008. A Factuality Profiler for Eventualities in Text. PhD Thesis, Brandeis University, Waltham, MA, USA. Daniel Schnorbusch. 2004. Semantische Klassen aus syntaktischen Klassen? In Stefan Langer and Daniel Schnorbusch, editors, Semantik im Lexikon, pages 33–58. Gunter Narr Verlag, T¨ubingen. Sabine Schulte im Walde. 2006. Experiments on the Automatic Induction of German Semantic Verb Classes. Computational Linguistics, 32(2):159– 194, June. Kerstin Schwabe and Robert Fittler. 2009. Semantic Characterizations of German Question-Embedding Predicates. 
In Peter Bosch, David Gabelaia, and J´erˆome Lang, editors, Logic, Language, and Computation, volume 5422 of Lecture Notes in Computer Science, pages 229–241. Springer Berlin Heidelberg. Gy¨orgy Szarvas, Veronika Vincze, Rich`ard Farkas, Gy¨orgy Mra, and Iryna Gurevych. 2012. CrossGenre and Cross-Domain Detection of Semantic Uncertainty. Computational Linguistics, 38(2):335– 367, June. A Supplemental Material A.1 Verb Lemma Frequency List In order to count the occurrences of verb lemmas in the German corpus SDeWaC, we used a reader and pre-processing components (i.e., the LanguageTool segmenter and the TreeTagger for POS tagging and lemmatization) from the DKPro Core collection (Eckart de Castilho and Gurevych, 2014). From DKPro Core, we also used a component that detects separated particles of German particle verbs and replaces the lemma of the verb base form annotated by the TreeTagger by the true lemma of the particle verb. Our verb lemma counting pipeline is available at github.com/UKPLab/ acl2016-modality-verbclasses. Sense linking based on subcategorization frames get lexical entry les of source verb vs get equivalent verb vt in target lexicon get lexical entry let of target verb vt forall frame fi in les get listOfArguments li of fi forall frame fj in let get sense sj of frame fj get listOfArguments lj of fj if size(li) = size(lj) AND features(li) = features(lj) link (vs, fi) and sj end if end for end for Table 5: Algorithm for verb sense linking. A.2 Verb Sense Linking For the linking-based evaluation, we used UBY (version 0.7.0) versions of the following three resources: the German wordnet GermaNet (version 9.0), the English lexicons VerbNet (version 3.2) and FrameNet (version 1.5). The algorithm for cross-lingual verb sense linking is given in pseudo-code in Table 5. The implementation is available at github.com/UKPLab/ acl2016-modality-verbclasses. 822
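To make the linking procedure of Table 5 more concrete, the following Python sketch restates it over a simplified data model. It is not the released implementation (which operates on UBY lexicon entries); representing a lexical entry as a list of (frame, sense, arguments) triples and reducing arguments to sorted feature tuples are assumptions made only for illustration.

def argument_features(arguments):
    # an assumed reduction of each argument to the features used for
    # matching (e.g. syntactic category, case, finiteness)
    return sorted(arguments)

def link_senses(source_entry, target_entry):
    # Each entry is assumed to be a list of (frame, sense, arguments)
    # triples; for the source verb the sense slot is unused.
    links = []
    for frame_src, _, args_src in source_entry:
        for frame_tgt, sense_tgt, args_tgt in target_entry:
            if (len(args_src) == len(args_tgt)
                    and argument_features(args_src) == argument_features(args_tgt)):
                # link the source frame to the target sense, as in Table 5
                links.append((frame_src, sense_tgt))
    return links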
2016
77
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 823–833, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Tree-to-Sequence Attentional Neural Machine Translation Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka The University of Tokyo, 3-7-1 Hongo, Bunkyo-ku, Tokyo, Japan {eriguchi, hassy, tsuruoka}@logos.t.u-tokyo.ac.jp Abstract Most of the existing Neural Machine Translation (NMT) models focus on the conversion of sequential data and do not directly use syntactic information. We propose a novel end-to-end syntactic NMT model, extending a sequenceto-sequence model with the source-side phrase structure. Our model has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence. Experimental results on the WAT’15 Englishto-Japanese dataset demonstrate that our proposed model considerably outperforms sequence-to-sequence attentional NMT models and compares favorably with the state-of-the-art tree-to-string SMT system. 1 Introduction Machine Translation (MT) has traditionally been one of the most complex language processing problems, but recent advances of Neural Machine Translation (NMT) make it possible to perform translation using a simple end-to-end architecture. In the Encoder-Decoder model (Cho et al., 2014b; Sutskever et al., 2014), a Recurrent Neural Network (RNN) called the encoder reads the whole sequence of source words to produce a fixedlength vector, and then another RNN called the decoder generates the target words from the vector. The Encoder-Decoder model has been extended with an attention mechanism (Bahdanau et al., 2015; Luong et al., 2015a), which allows the model to jointly learn the soft alignment between the source language and the target language. NMT models have achieved state-of-the-art results in English-to-French and English-to-German transFigure 1: Alignment between an English phrase and a Japanese word. lation tasks (Luong et al., 2015b; Luong et al., 2015a). However, it is yet to be seen whether NMT is competitive with traditional Statistical Machine Translation (SMT) approaches in translation tasks for structurally distant language pairs such as English-to-Japanese. Figure 1 shows a pair of parallel sentences in English and Japanese. English and Japanese are linguistically distant in many respects; they have different syntactic constructions, and words and phrases are defined in different lexical units. In this example, the Japanese word “緑茶” is aligned with the English words “green” and “tea”, and the English word sequence “a cup of” is aligned with a special symbol “null”, which is not explicitly translated into any Japanese words. One way to solve this mismatch problem is to consider the phrase structure of the English sentence and align the phrase “a cup of green tea” with “緑茶”. In SMT, it is known that incorporating syntactic constituents of the source language into the models improves word alignment (Yamada and Knight, 2001) and translation accuracy (Liu et al., 2006; Neubig and Duh, 2014). However, the existing NMT models do not allow us to perform this kind of alignment. In this paper, we propose a novel attentional NMT model to take advantage of syntactic infor823 mation. 
Following the phrase structure of a source sentence, we encode the sentence recursively in a bottom-up fashion to produce a vector representation of the sentence and decode it while aligning the input phrases and words with the output. Our experimental results on the WAT’15 English-toJapanese translation task show that our proposed model achieves state-of-the-art translation accuracy. 2 Neural Machine Translation 2.1 Encoder-Decoder Model NMT is an end-to-end approach to data-driven machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). In other words, the NMT models directly estimate the conditional probability p(y|x) given a large collection of source and target sentence pairs (x, y). An NMT model consists of an encoder process and a decoder process, and hence they are often called Encoder-Decoder models. In the Encoder-Decoder models, a sentence is treated as a sequence of words. In the encoder process, the encoder embeds each of the source words x = (x1, x2, · · · , xn) into a d-dimensional vector space. The decoder then outputs a word sequence y = (y1, y2, · · · , ym) in the target language given the information on the source sentence provided by the encoder. Here, n and m are the lengths of the source and target sentences, respectively. RNNs allow one to effectively embed sequential data into the vector space. In the RNN encoder, the i-th hidden unit hi ∈ Rd×1 is calculated given the i-th input xi and the previous hidden unit hi−1 ∈Rd×1, hi = fenc(xi, hi−1), (1) where fenc is a non-linear function, and the initial hidden unit h0 is usually set to zeros. The encoding function fenc is recursively applied until the nth hidden unit hn is obtained. The RNN EncoderDecoder models assume that hn represents a vector of the meaning of the input sequence up to the n-th word. After encoding the whole input sentence into the vector space, we decode it in a similar way. The initial decoder unit s1 is initialized with the input sentence vector (s1 = hn). Given the previous target word and the j-th hidden unit of the decoder, the conditional probability that the j-th target word is generated is calculated as follows: p(yj|y<j, x) = g(sj), (2) where g is a non-linear function. The j-th hidden unit of the decoder is calculated by using another non-linear function fdec as follows: sj = fdec(yj−1, sj−1). (3) We employ Long Short-Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) in place of vanilla RNN units. The tth LSTM unit consists of several gates and two different types of states: a hidden unit ht ∈Rd×1 and a memory cell ct ∈Rd×1, it = σ(W (i)xt + U (i)ht−1 + b(i)), ft = σ(W (f)xt + U (f)ht−1 + b(f)), ot = σ(W (o)xt + U (o)ht−1 + b(o)), ˜ct = tanh(W (˜c)xt + U (˜c)ht−1 + b(˜c)), ct = it ⊙˜ct + ft ⊙ct−1, ht = ot ⊙tanh(ct), (4) where each of it, ft, ot and ˜ct ∈Rd×1 denotes an input gate, a forget gate, an output gate, and a state for updating the memory cell, respectively. W (·) ∈Rd×d and U (·) ∈Rd×d are weight matrices, b(·) ∈Rd×1 is a bias vector, and xt ∈Rd×1 is the word embedding of the t-th input word. σ(·) is the logistic function, and the operator ⊙denotes element-wise multiplication between vectors. 2.2 Attentional Encoder-Decoder Model The NMT models with an attention mechanism (Bahdanau et al., 2015; Luong et al., 2015a) have been proposed to softly align each decoder state with the encoder states. 
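For readers who prefer code to equations, here is a minimal numpy sketch of the LSTM recurrence in Equation (4). The variable names mirror the equations; packing the weight matrices W(·), U(·) and biases b(·) into dictionaries is only a convenience of this sketch, not part of the model definition.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # one step of the sequential LSTM encoder/decoder (Equation (4))
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])      # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])      # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])      # output gate
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    c_t = i * c_tilde + f * c_prev                            # memory cell
    h_t = o * np.tanh(c_t)                                    # hidden unit
    return h_t, c_t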
The attention mechanism allows the NMT models to explicitly quantify how much each encoder state contributes to the word prediction at each time step. In the attentional NMT model in Luong et al. (2015a), at the j-th step of the decoder process, the attention score αj(i) between the i-th source hidden unit hi and the j-th target hidden unit sj is calculated as follows: αj(i) = exp(hi · sj) ∑n k=1 exp(hk · sj), (5) where hi · sj is the inner product of hi and sj, which is used to directly calculate the similarity score between hi and sj. The j-th context vector 824 Figure 2: Attentional Encoder-Decoder model. dj is calculated as the summation vector weighted by αj(i): dj = n ∑ i=1 αj(i)hi. (6) To incorporate the attention mechanism into the decoding process, the context vector is used for the the j-th word prediction by putting an additional hidden layer ˜sj: ˜sj = tanh(Wd[sj; dj] + bd), (7) where [sj; dj] ∈R2d×1 is the concatenation of sj and dj, and Wd ∈Rd×2d and bd ∈Rd×1 are a weight matrix and a bias vector, respectively. The model predicts the j-th word by using the softmax function: p(yj|y<j, x) = softmax(Ws˜sj + bs), (8) where Ws ∈R|V |×d and bs ∈R|V |×1 are a weight matrix and a bias vector, respectively. |V | stands for the size of the vocabulary of the target language. Figure 2 shows an example of the NMT model with the attention mechanism. 2.3 Objective Function of NMT Models The objective function to train the NMT models is the sum of the log-likelihoods of the translation pairs in the training data: J(θ) = 1 |D| ∑ (x,y)∈D log p(y|x), (9) where D denotes a set of parallel sentence pairs. The model parameters θ are learned through Stochastic Gradient Descent (SGD). 3 Attentional Tree-to-Sequence Model 3.1 Tree-based Encoder + Sequential Encoder The exsiting NMT models treat a sentence as a sequence of words and neglect the structure of Figure 3: Proposed model: Tree-to-sequence attentional NMT model. a sentence inherent in language. We propose a novel tree-based encoder in order to explicitly take the syntactic structure into consideration in the NMT model. We focus on the phrase structure of a sentence and construct a sentence vector from phrase vectors in a bottom-up fashion. The sentence vector in the tree-based encoder is therefore composed of the structural information rather than the sequential data. Figure 3 shows our proposed model, which we call a tree-to-sequence attentional NMT model. In Head-driven Phrase Structure Grammar (HPSG) (Sag et al., 2003), a sentence is composed of multiple phrase units and represented as a binary tree as shown in Figure 1. Following the structure of the sentence, we construct a tree-based encoder on top of the standard sequential encoder. The k-th parent hidden unit h(phr) k for the k-th phrase is calculated using the left and right child hidden units hl k and hr k as follows: h(phr) k = ftree(hl k, hr k), (10) where ftree is a non-linear function. We construct a tree-based encoder with LSTM units, where each node in the binary tree is represented with an LSTM unit. When initializing the leaf units of the tree-based encoder, we employ the sequential LSTM units described in Section 2.1. Each non-leaf node is also represented with an LSTM unit, and we employ Tree-LSTM (Tai et al., 2015) to calculate the LSTM unit of the parent node which has two child LSTM units. 
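The attention computation of Equations (5)-(7) can be summarized in a short numpy sketch. It is given only as an illustration (the system itself is implemented in C++ with Eigen); packing the encoder hidden units into a matrix and subtracting the maximum score for numerical stability are conventions of the sketch.

import numpy as np

def attend(encoder_states, s_j, W_d, b_d):
    # encoder_states: an (n x d) matrix whose rows are the hidden units h_i
    scores = encoder_states @ s_j                  # h_i . s_j
    scores = scores - scores.max()                 # numerically stable softmax
    alpha = np.exp(scores) / np.exp(scores).sum()  # Equation (5)
    d_j = alpha @ encoder_states                   # Equation (6)
    s_tilde = np.tanh(W_d @ np.concatenate([s_j, d_j]) + b_d)  # Equation (7)
    return s_tilde, alpha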
The hidden unit h(phr) k ∈Rd×1 and the memory cell c(phr) k ∈Rd×1 for the k-th parent node are calcu825 lated as follows: ik = σ(U (i) l hl k + U (i) r hr k + b(i)), f l k = σ(U (fl) l hl k + U (fl) r hr k + b(fl)), f r k = σ(U (fr) l hl k + U (fr) r hr k + b(fr)), ok = σ(U (o) l hl k + U (o) r hr k + b(o)), ˜ck = tanh(U (˜c) l hl k + U (˜c) r hr k + b(˜c)), c(phr) k = ik ⊙˜ck + f l k ⊙cl k + f r k ⊙cr k, h(phr) k = ok ⊙tanh(c(phr) k ), (11) where ik, f l k, f r k, oj, ˜cj ∈Rd×1 are an input gate, the forget gates for left and right child units, an output gate, and a state for updating the memory cell, respectively. cl k and cr k are the memory cells for the left and right child units, respectively. U (·) ∈Rd×d denotes a weight matrix, and b(·) ∈Rd×1 represents a bias vector. Our proposed tree-based encoder is a natural extension of the conventional sequential encoder, since Tree-LSTM is a generalization of chainstructured LSTM (Tai et al., 2015). Our encoder differs from the original Tree-LSTM in the calculation of the LSTM units for the leaf nodes. The motivation is to construct the phrase nodes in a context-sensitive way, which, for example, allows the model to compute different representations for multiple occurrences of the same word in a sentence because the sequential LSTMs are calculated in the context of the previous units. This ability contrasts with the original Tree-LSTM, in which the leaves are composed only of the word embeddings without any contextual information. 3.2 Initial Decoder Setting We now have two different sentence vectors: one is from the sequence encoder and the other from the tree-based encoder. As shown in Figure 3, we provide another Tree-LSTM unit which has the final sequential encoder unit (hn) and the tree-based encoder unit (h(phr) root ) as two child units and set it as the initial decoder s1 as follows: s1 = gtree(hn, h(phr) root ), (12) where gtree is the same function as ftree with another set of Tree-LSTM parameters. This initialization allows the decoder to capture information from both the sequential data and phrase structures. Zoph and Knight (2016) proposed a similar method using a Tree-LSTM for initializing the decoder, with which they translate multiple source languages to one target language. When the syntactic parser fails to output a parse tree for a sentence, we encode the sentence with the sequential encoder by setting h(phr) root = 0. Our proposed treebased encoder therefore works with any sentences. 3.3 Attention Mechanism in Our Model We adopt the attention mechanism into our treeto-sequence model in a novel way. Our model gives attention not only to sequential hidden units but also to phrase hidden units. This attention mechanism tells us which words or phrases in the source sentence are important when the model decodes a target word. The j-th context vector dj is composed of the sequential and phrase vectors weighted by the attention score αj(i): dj = n ∑ i=1 αj(i)hi + 2n−1 ∑ i=n+1 αj(i)h(phr) i . (13) Note that a binary tree has n −1 phrase nodes if the tree has n leaves. We set a final decoder ˜sj in the same way as Equation (7). In addition, we adopt the input-feeding method (Luong et al., 2015a) in our model, which is a method for feeding ˜sj−1, the previous unit to predict the word yj−1, into the current target hidden unit sj, sj = fdec(yj−1, [sj−1; ˜sj−1]), (14) where [sj−1; ˜sj−1] is the concatenation of sj−1 and ˜sj−1. 
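A numpy sketch of the binary Tree-LSTM composition in Equation (11) is given below. As before, this is an illustration only; grouping the left/right weight matrices U(·)l, U(·)r into pairs inside a dictionary is a convention of the sketch rather than part of the model.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tree_lstm_node(h_l, c_l, h_r, c_r, U, b):
    # compose a parent phrase node from its left and right children,
    # with a separate forget gate for each child (Equation (11))
    i   = sigmoid(U["i"][0] @ h_l + U["i"][1] @ h_r + b["i"])
    f_l = sigmoid(U["fl"][0] @ h_l + U["fl"][1] @ h_r + b["fl"])
    f_r = sigmoid(U["fr"][0] @ h_l + U["fr"][1] @ h_r + b["fr"])
    o   = sigmoid(U["o"][0] @ h_l + U["o"][1] @ h_r + b["o"])
    c_tilde = np.tanh(U["c"][0] @ h_l + U["c"][1] @ h_r + b["c"])
    c_phr = i * c_tilde + f_l * c_l + f_r * c_r   # parent memory cell
    h_phr = o * np.tanh(c_phr)                    # parent hidden unit
    return h_phr, c_phr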
The input-feeding approach contributes to the enrichment in the calculation of the decoder, because ˜sj−1 is an informative unit which can be used to predict the output word as well as to be compacted with attentional context vectors. Luong et al. (2015a) showed that the input-feeding approach improves BLEU scores. We also observed the same improvement in our preliminary experiments. 3.4 Sampling-Based Approximation to the NMT Models The biggest computational bottleneck of training the NMT models is in the calculation of the softmax layer described in Equation (8), because its computational cost increases linearly with the size of the vocabulary. The speedup technique with GPUs has proven useful for sequence-based NMT models (Sutskever et al., 2014; Luong et al., 826 2015a) but it is not easily applicable when dealing with tree-structured data. In order to reduce the training cost of the NMT models at the softmax layer, we employ BlackOut (Ji et al., 2016), a sampling-based approximation method. BlackOut has been shown to be effective in RNN Language Models (RNNLMs) and allows a model to run reasonably fast even with a million word vocabulary with CPUs. At each word prediction step in the training, BlackOut estimates the conditional probability in Equation (2) for the target word and K negative samples using a weighted softmax function. The negative samples are drawn from the unigram distribution raised to the power β ∈ [0, 1] (Mikolov et al., 2013). The unigram distribution is estimated using the training data and β is a hyperparameter. BlackOut is closely related to Noise Contrastive Estimation (NCE) (Gutmann and Hyv¨arinen, 2012) and achieves better perplexity than the original softmax and NCE in RNNLMs. The advantages of Blackout over the other methods are discussed in Ji et al. (2016). Note that BlackOut can be used as the original softmax once the training is finished. 4 Experiments 4.1 Training Data We applied the proposed model to the English-toJapanese translation dataset of the ASPEC corpus given in WAT’15.1 Following Zhu (2015), we extracted the first 1.5 million translation pairs from the training data. To obtain the phrase structures of the source sentences, i.e., English, we used the probabilistic HPSG parser Enju (Miyao and Tsujii, 2008). We used Enju only to obtain a binary phrase structure for each sentence and did not use any HPSG specific information. For the target language, i.e., Japanese, we used KyTea (Neubig et al., 2011), a Japanese segmentation tool, and performed the pre-processing steps recommended in WAT’15.2 We then filtered out the translation pairs whose sentence lengths are longer than 50 and whose source sentences are not parsed successfully. Table 1 shows the details of the datasets used in our experiments. We carried out two experiments on a small training dataset to investigate 1http://orchid.kuee.kyoto-u.ac.jp/WAT/ WAT2015/index.html 2http://orchid.kuee.kyoto-u.ac.jp/WAT/ WAT2015/baseline/dataPreparationJE.html Sentences Parsed successfully Train 1,346,946 1,346,946 Development 1,790 1,789 Test 1,812 1,811 Table 1: Dataset in ASPEC corpus. Train (small) Train (large) sentence pairs 100,000 1,346,946 |V | in English 25,478 87,796 |V | in Japanese 23,532 65,680 Table 2: Training dataset and the vocabulary sizes. the effectiveness of our proposed model and on a large training dataset to compare our proposed methods with the other systems. The vocabulary consists of words observed in the training data more than or equal to N times. 
We set N = 2 for the small training dataset and N = 5 for the large training dataset. The out-ofvocabulary words are mapped to the special token “unk”. We added another special symbol “eos” for both languages and inserted it at the end of all the sentences. Table 2 shows the details of each training dataset and its corresponding vocabulary size. 4.2 Training Details The biases, softmax weights, and BlackOut weights are initialized with zeros. The hyperparameter β of BlackOut is set to 0.4 as recommended by Ji et al. (2016). Following J´ozefowicz et al. (2015), we initialize the forget gate biases of LSTM and Tree-LSTM with 1.0. The remaining model parameters in the NMT models in our experiments are uniformly initialized in [−0.1, 0.1]. The model parameters are optimized by plain SGD with the mini-batch size of 128. The initial learning rate of SGD is 1.0. We halve the learning rate when the development loss becomes worse. Gradient norms are clipped to 3.0 to avoid exploding gradient problems (Pascanu et al., 2012). Small Training Dataset We conduct experiments with our proposed model and the sequential attentional NMT model with the input-feeding approach. Each model has 256-dimensional hidden units and word embeddings. The number of negative samples K of BlackOut is set to 500 or 2000. 827 Large Training Dataset Our proposed model has 512-dimensional word embeddings and ddimensional hidden units (d ∈{512, 768, 1024}). K is set to 2500. Our code3 is implemented in C++ using the Eigen library,4 a template library for linear algebra, and we run all of the experiments on multicore CPUs.5 It takes about a week to train a model on the large training dataset with d = 512. 4.3 Decoding process We use beam search to decode a target sentence for an input sentence x and calculate the sum of the log-likelihoods of the target sentence y = (y1, · · · , ym) as the beam score: score(x, y) = m ∑ j=1 log p(yj|y<j, x). (15) Decoding in the NMT models is a generative process and depends on the target language model given a source sentence. The score becomes smaller as the target sentence becomes longer, and thus the simple beam search does not work well when decoding a long sentence (Cho et al., 2014a; Pouget-Abadie et al., 2014). In our preliminary experiments, the beam search with the length normalization in Cho et al. (2014a) was not effective in English-to-Japanese translation. The method in Pouget-Abadie et al. (2014) needs to estimate the conditional probability p(x|y) using another NMT model and thus is not suitable for our work. In this paper, we use statistics on sentence lengths in beam search. Assuming that the length of a target sentence correlates with the length of a source sentence, we redefine the score of each candidate as follows: score(x, y) = Lx,y + m ∑ j=1 log p(yj|y<j, x),(16) Lx,y = log p(len(y)|len(x)), (17) where Lx,y is the penalty for the conditional probability of the target sentence length len(y) given the source sentence length len(x). It allows the model to decode a sentence by considering the length of the target sentence. In our experiments, we computed the conditional probability 3https://github.com/tempra28/tree2seq 4http://eigen.tuxfamily.org/index.php 516 threads on Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz p(len(y)|len(x)) in advance following the statistics collected in the first one million pairs of the training dataset. We allow the decoder to generate up to 100 words. 
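The length-penalized beam score of Equations (16)-(17) amounts to adding a length prior, estimated from source/target length pairs in the training data, to the usual sum of word log-likelihoods. The Python sketch below illustrates this; the count-based estimate and the handling of unseen length pairs are our own simplifications, not the exact procedure used in the experiments.

import math

def length_log_prob(len_x, len_y, length_counts):
    # log p(len(y) | len(x)), estimated from (source length, target length)
    # pairs counted over the training data (Equation (17))
    total = sum(c for (lx, _), c in length_counts.items() if lx == len_x)
    count = length_counts.get((len_x, len_y), 0)
    if total == 0 or count == 0:
        return float("-inf")  # placeholder; smoothing would also be possible
    return math.log(count / total)

def beam_score(word_log_probs, len_x, len_y, length_counts):
    # Equation (16): length prior plus the sum of word log-likelihoods
    return length_log_prob(len_x, len_y, length_counts) + sum(word_log_probs)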
4.4 Evaluation We evaluated the models by two automatic evaluation metrics, RIBES (Isozaki et al., 2010) and BLEU (Papineni et al., 2002) following WAT’15. We used the KyTea-based evaluation script for the translation results.6 The RIBES score is a metric based on rank correlation coefficients with word precision, and the BLEU score is based on n-gram word precision and a Brevity Penalty (BP) for outputs shorter than the references. RIBES is known to have stronger correlation with human judgements than BLEU in translation between English and Japanese as discussed in Isozaki et al. (2010). 5 Results and Discussion 5.1 Small Training Dataset Table 3 shows the perplexity, BLEU, RIBES, and the training time on the development data with the Attentional NMT (ANMT) models trained on the small dataset. We conducted the experiments with our proposed method using BlackOut and softmax. We decoded a translation by our proposed beam search with a beam size of 20. As shown in Table 3, the results of our proposed model with BlackOut improve as the number of negative samples K increases. Although the result of softmax is better than those of BlackOut (K = 500, 2000), the training time of softmax per epoch is about three times longer than that of BlackOut even with the small dataset. As to the results of the ANMT model, reversing the word order in the input sentence decreases the scores in English-to-Japanese translation, which contrasts with the results of other language pairs reported in previous work (Sutskever et al., 2014; Luong et al., 2015a). By taking syntactic information into consideration, our proposed model improves the scores, compared to the sequential attention-based approach. We found that better perplexity does not always lead to better translation scores with BlackOut as shown in Table 3. One of the possible reasons is that BlackOut distorts the target word distribution 6http://lotus.kuee.kyoto-u.ac.jp/WAT/ evaluation/automatic_evaluation_systems/ automaticEvaluationJA.html 828 K Perplexity RIBES BLEU Time/epoch (min.) Proposed model 500 19.6 71.8 20.0 55 Proposed model 2000 21.0 72.6 20.5 70 Proposed model (Softmax) — 17.9 73.2 21.8 180 ANMT (Luong et al., 2015a) 500 21.6 70.7 18.5 45 + reverse input 500 22.6 69.8 17.7 45 ANMT (Luong et al., 2015a) 2000 23.1 71.5 19.4 60 + reverse input 2000 26.1 69.5 17.5 60 Table 3: Evaluation results on the development data using the small training data. The training time per epoch is also shown, and K is the number of negative samples in BlackOut. Beam size RIBES BLEU (BP) Simple BS 6 72.3 20.0 (90.1) 20 72.3 19.5 (85.1) Proposed BS 20 72.6 20.5 (91.7) Table 4: Effects of the Beam Search (BS) on the development data. by the modified unigram-based negative sampling where frequent words can be treated as the negative samples multiple times at each training step. Effects of the proposed beam search Table 4 shows the results on the development data of proposed method with BlackOut (K = 2000) by the simple beam search and our proposed beam search. The beam size is set to 6 or 20 in the simple beam search, and to 20 in our proposed search. We can see that our proposed search outperforms the simple beam search in both scores. Unlike RIBES, the BLEU score is sensitive to the beam size and becomes lower as the beam size increases. We found that the BP had a relatively large impact on the BLEU score in the simple beam search as the beam size increased. 
Our search method works better than the simple beam search by keeping long sentences in the candidates with a large beam size. Effects of the sequential LSTM units We also investigated the effects of the sequential LSTMs at the leaf nodes in our proposed tree-based encoder. Table 5 shows the result on the development data of our proposed encoder and that of an attentional tree-based encoder without sequential LSTMs with BlackOut (K = 2000).7 The results show that our proposed encoder considerably out7For this evaluation, we used the 1,789 sentences that were successfully parsed by Enju because the encoder without sequential LSTMs always requires a parse tree. RIBES BLEU Without sequential LSTMs 69.4 19.5 With sequential LSTMs 72.3 20.0 Table 5: Effects of the sequential LSTMs in our proposed tree-based encoder on the development data. performs the encoder without sequential LSTMs, suggesting that the sequential LSTMs at the leaf nodes contribute to the context-aware construction of the phrase representations in the tree. 5.2 Large Training Dataset Table 6 shows the experimental results of RIBES and BLEU scores achieved by the trained models on the large dataset. We decoded the target sentences by our proposed beam search with the beam size of 20.8 The results of the other systems are the ones reported in Nakazawa et al. (2015). All of our proposed models show similar performance regardless of the value of d. Our ensemble model is composed of the three models with d = 512, 768, and 1024, and it shows the best RIBES score among all systems.9 As for the time required for training, our implementation needs about one day to perform one epoch on the large training dataset with d = 512. It would take about 11 days without using the BlackOut sampling. Comparison with the NMT models The model of Zhu (2015) is an ANMT model (Bahdanau et al., 2015) with a bi-directional LSTM encoder, and uses 1024-dimensional hidden units and 10008We found two sentences which ends without eos with d = 512, and then we decoded it again with the beam size of 1000 following Zhu (2015). 9Our ensemble model yields a METEOR (Denkowski and Lavie, 2014) score of 53.6 with language option “-l other”. 829 Model RIBES BLEU Proposed model (d = 512) 81.46 34.36 Proposed model (d = 768) 81.89 34.78 Proposed model (d = 1024) 81.58 34.87 Ensemble of the above three models 82.45 36.95 ANMT with LSTMs (Zhu, 2015) 79.70 32.19 + Ensemble, unk replacement 80.27 34.19 + System combination, 80.91 36.21 3 pre-reordered ensembles ANMT with GRUs (Lee et al., 2015) 81.15 35.75 + character-based decoding, Begin/Inside representation PB baseline 69.19 29.80 HPB baseline 74.70 32.56 T2S baseline 75.80 33.44 T2S model (Neubig and Duh, 2014) 79.65 36.58 + ANMT Rerank (Neubig et al., 2015) 81.38 38.17 Table 6: Evaluation results on the test data. dimensional word embeddings. The model of Lee et al. (2015) is also an ANMT model with a bidirectional Gated Recurrent Unit (GRU) encoder, and uses 1000-dimensional hidden units and 200dimensional word embeddings. Both models are sequential ANMT models. Our single proposed model with d = 512 outperforms the best result of Zhu (2015)’s end-to-end NMT model with ensemble and unknown replacement by +1.19 RIBES and by +0.17 BLEU scores. Our ensemble model shows better performance, in both RIBES and BLEU scores, than that of Zhu (2015)’s best system which is a hybrid of the ANMT and SMT models by +1.54 RIBES and by +0.74 BLEU scores and Lee et al. 
(2015)’s ANMT system with special character-based decoding by +1.30 RIBES and +1.20 BLEU scores. Comparison with the SMT models PB, HPB and T2S are the baseline SMT systems in WAT’15: a phrase-based model, a hierarchical phrase-based model, and a tree-to-string model, respectively (Nakazawa et al., 2015). The best model in WAT’15 is Neubig et al. (2015)’s treeto-string SMT model enhanced with reranking by ANMT using a bi-directional LSTM encoder. Our proposed end-to-end NMT model compares favorably with Neubig et al. (2015). 5.3 Qualitative Analysis We illustrate the translations of test data by our model with d = 512 and several attentional relations when decoding a sentence. In Figures 4 and 5, an English sentence represented as a binary tree is translated into Japanese, and several attentional relations between English words or phrases and Figure 4: Translation example of a short sentence and the attentional relations by our proposed model. Japanese word are shown with the highest attention score α. The additional attentional relations are also illustrated for comparison. We can see the target words softly aligned with source words and phrases. In Figure 4, the Japanese word “液晶” means “liquid crystal”, and it has a high attention score (α = 0.41) with the English phrase “liquid crystal for active matrix”. This is because the j-th target hidden unit sj has the contextual information about the previous words y<j including “活性マ トリックスの” (“for active matrix” in English). The Japanese word “セル” is softly aligned with the phrase “the cells” with the highest attention score (α = 0.35). In Japanese, there is no definite article like “the” in English, and it is usually aligned with null described as Section 1. In Figure 5, in the case of the Japanese word “示” (“showed” in English), the attention score with the English phrase “showed excellent performance” (α = 0.25) is higher than that with the English word “showed” (α = 0.01). The Japanese word “の” (“of” in English) is softly aligned with the phrase “of Si dot MOS capacitor” with the highest attention score (α = 0.30). It is because our attention mechanism takes each previous context of the Japanese phrases “優れた性能” (“excellent performance” in English) and “Siドット MOSコンデンサ” (“Si dot MOS capacitor” in English) into account and softly aligned the target words with the whole phrase when translating the English verb “showed” and the preposition “of”. Our proposed model can thus flexibly learn the attentional relations between English and Japanese. We observed that our model translated the word “active” into “活性”, a synonym of the reference word “アクティブ”. We also found similar examples in other sentences, where our model outputs 830 Figure 5: Translation example of a long sentence and the attentional relations by our proposed model. synonyms of the reference words, e.g. “女” and “ 女性” (“female” in English) and “NASA” and “航 空宇宙局” (“National Aeronautics and Space Administration” in English). These translations are penalized in terms of BLEU scores, but they do not necessarily mean that the translations were wrong. This point may be supported by the fact that the NMT models were highly evaluated in WAT’15 by crowd sourcing (Nakazawa et al., 2015). 6 Related Work Kalchbrenner and Blunsom (2013) were the first to propose an end-to-end NMT model using Convolutional Neural Networks (CNNs) as the source encoder and using RNNs as the target decoder. 
The Encoder-Decoder model can be seen as an extension of their model, and it replaces the CNNs with RNNs using GRUs (Cho et al., 2014b) or LSTMs (Sutskever et al., 2014). Sutskever et al. (2014) have shown that making the input sequences reversed is effective in a French-to-English translation task, and the technique has also proven effective in translation tasks between other European language pairs (Luong et al., 2015a). All of the NMT models mentioned above are based on sequential encoders. To incorporate structural information into the NMT models, Cho et al. (2014a) proposed to jointly learn structures inherent in source-side languages but did not report improvement of translation performance. These studies motivated us to investigate the role of syntactic structures explicitly given by existing syntactic parsers in the NMT models. The attention mechanism (Bahdanau et al., 2015) has promoted NMT onto the next stage. It enables the NMT models to translate while aligning the target with the source. Luong et al. (2015a) refined the attention model so that it can dynamically focus on local windows rather than the entire sentence. They also proposed a more effective attentional path in the calculation of ANMT models. Subsequently, several ANMT models have been proposed (Cheng et al., 2016; Cohn et al., 2016); however, each model is based on the existing sequential attentional models and does not focus on a syntactic structure of languages. 7 Conclusion In this paper, we propose a novel syntactic approach that extends attentional NMT models. We focus on the phrase structure of the input sentence and build a tree-based encoder following the parsed tree. Our proposed tree-based encoder is a natural extension of the sequential encoder model, where the leaf units of the tree-LSTM in the encoder can work together with the original sequential LSTM encoder. Moreover, the attention mechanism allows the tree-based encoder to align not only the input words but also input phrases with the output words. Experimental results on the WAT’15 English-to-Japanese translation dataset demonstrate that our proposed model achieves the best RIBES score and outperforms the sequential attentional NMT model. Acknowledgments We thank the anonymous reviewers for their constructive comments and suggestions. This work was supported by CREST, JST, and JSPS KAKENHI Grant Number 15J12597. 831 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations. Yong Cheng, Shiqi Shen, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Agreement-based Joint Training for Bidirectional Attention-based Neural Machine Translation. In Proceedings of the 25th International Joint Conference on Artificial Intelligence. to appear. KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the Properties of Neural Machine Translation: EncoderDecoder Approaches. In Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8). Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724– 1734. 
Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating Structural Alignment Biases into an Attentional Neural Translation Model. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. to appear. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics 2014 Workshop on Statistical Machine Translation. Felix A. Gers, J¨urgen Schmidhuber, and Fred A. Cummins. 2000. Learning to Forget: Continual Prediction with LSTM. Neural Computation, 12(10):2451–2471. Michael U. Gutmann and Aapo Hyv¨arinen. 2012. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(1):307–361. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735–1780. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic Evaluation of Translation Quality for Distant Language Pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944–952. Shihao Ji, S. V. N. Vishwanathan, Nadathur Satish, Michael J. Anderson, and Pradeep Dubey. 2016. BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies. In Proceedings of the 4th International Conference on Learning Representations. Rafal J´ozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An Empirical Exploration of Recurrent Network Architectures. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pages 2342–2350. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709. Hyoung-Gyu Lee, JaeSong Lee, Jun-Seok Kim, and Chang-Ki Lee. 2015. NAVER Machine Translation System for WAT 2015. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 69–73. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 609–616. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the Rare Word Problem in Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11–19. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Yusuke Miyao and Jun’ichi Tsujii. 2008. Feature Forest Models for Probabilistic HPSG Parsing. Computational Linguistics, 34(1):35–80. 
Toshiaki Nakazawa, Hideya Mino, Isao Goto, Graham Neubig, Sadao Kurohashi, and Eiichiro Sumita. 2015. Overview of the 2nd Workshop on Asian Translation. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 1–28. 832 Graham Neubig and Kevin Duh. 2014. On the elements of an accurate tree-to-string machine translation system. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 143–149. Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise Prediction for Robust, Adaptable Japanese Morphological Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 529–533. Graham Neubig, Makoto Morishita, and Satoshi Nakamura. 2015. Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 35–41. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. Understanding the exploding gradient problem. arXiv: 1211.5063. Jean Pouget-Abadie, Dzmitry Bahdanau, Bart van Merrienboer, Kyunghyun Cho, and Yoshua Bengio. 2014. Overcoming the curse of sentence length for neural machine translation using automatic segmentation. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 78–85. Ivan A. Sag, Thomas Wasow, and Emily Bender. 2003. Syntactic Theory: A Formal Introduction. Center for the Study of Language and Information, Stanford, 2nd edition. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, pages 523–530. Zhongyuan Zhu. 2015. Evaluating Neural Machine Translation in English-Japanese Task. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 61–68. Barret Zoph and Kevin Knight. 2016. Multi-Source Neural Translation. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. to appear. 833
2016
78
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 834–842, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Coordination Annotation Extension in the Penn Tree Bank Jessica Ficler Computer Science Department Bar-Ilan University Israel [email protected] Yoav Goldberg Computer Science Department Bar-Ilan University Israel [email protected] Abstract Coordination is an important and common syntactic construction which is not handled well by state of the art parsers. Coordinations in the Penn Treebank are missing internal structure in many cases, do not include explicit marking of the conjuncts and contain various errors and inconsistencies. In this work, we initiated manual annotation process for solving these issues. We identify the different elements in a coordination phrase and label each element with its function. We add phrase boundaries when these are missing, unify inconsistencies, and fix errors. The outcome is an extension of the PTB that includes consistent and detailed structures for coordinations. We make the coordination annotation publicly available, in hope that they will facilitate further research into coordination disambiguation. 1 1 Introduction The Penn Treebank (PTB) (Marcus et al., 1993) is perhaps the most commonly used resource for training and evaluating syntax-based natural language processing systems. Despite its widespread adoption and undisputed usefulness, some of the annotations in PTB are not optimal, and could be improved. The work of Vadas and Curran (2007) identified and addressed one such annotation deficiency – the lack of internal structure in base NPs. In this work we focus on the annotation of coordinating conjunctions. Coordinating conjunctions (e.g. “John and Mary”, “to be or not to be”) are a very common syntactic construction, appearing in 38.8% of the 1The data is available in: https://github.com/Jess1ca/CoordinationExtPTB sentences in the PTB. As noted by Hogan (2007), coordination annotation in the PTB are not consistent, include errors, and lack internal structure in many cases (Hara et al., 2009; Hogan, 2007; Shimbo and Hara, 2007). Another issue is that PTB does not mark whether a punctuation is part of the coordination or not. This was resolved by Maier et al. (2012) which annotated punctuation in the PTB . These errors, inconsistencies, and in particular the lack of internal structural annotation turned researchers that were interested specifically in coordination disambiguation away from the PTB and towards much smaller, domain specific efforts such as the Genia Treebank (Kim et al., 2003) of biomedical texts (Hara et al., 2009; Shimbo and Hara, 2007). In addition, we also find that the PTB annotation make it hard, and often impossible, to correctly identify the elements that are being coordinated, and tell them apart from other elements that may appear in a coordination construction. While most of the coordination phrases are simple and include only conjuncts and a coordinator, many cases include additional elements with other syntactic functions , such as markers (e.g. “Both Alice and Bob”), connectives (e.g. “Fast and thus useful”) and shared elements (e.g. “Bob’s principles and opinions”) (Huddleston et al., 2002). The PTB annotations do not differentiate between these elements. 
For example, consider the following coordination phrases which begin with a PP: (a) “[in the open market]PP , [in private transactions] or [otherwise].” (b) “[According to Fred Demler]PP , [Highland Valley has already started operating] and [Cananea is expected to do so soon].” Even though the first element is a conjunct only in (a), both phrases are represented with the 834 marked elements as siblings. Our goal in this work is to fix these deficiencies. We aim for an annotation in which: • All coordination phrases are explicitly marked and are differentiated from noncoordination structures. • Each element in the coordination structure is explicitly marked with its role within the coordination structure. • Similar structures are assigned a consistent annotation. We also aim to fix existing errors involving coordination, so that the resulting corpus includes as few errors as possible. On top of these objectives, we also like to stay as close as possible to the original PTB structures. We identify the different elements that can participate in a coordination phrase, and enrich the PTB by labeling each element with its function. We add phrase boundaries when these are missing, unify inconsistencies, and fix errors. This is done based on a combination of automatic processing and manual annotation. The result is an extension of the PTB trees that include consistent and more detailed coordination structures. We release our annotation as a diff over the PTB. The extended coordination annotation fills an important gap in wide-scale syntactic annotation of English syntax, and is a necessary first step towards research on improving coordination disambiguation. 2 Background Coordination is a very common syntactic structure in which two or more elements are linked. An example for a coordination structure is “Alice and Bob traveled to Mars”. The elements (Alice and Bob) are called the conjuncts and and is called the coordinator. Other coordinator words include or, nor and but. Any grammatical function can be coordinated. For examples: “[relatively active]ADJP but [unfocused]ADJP ” ; “[in]IN and [out]IN the market”. While it is common for the conjuncts to be of the same syntactic category, coordination of elements with different syntactic categories are also possible (e.g. “Alice will visit Earth [tomorrow]NP or [in the next decade]PP ”). Less common coordinations are those with nonconstituent elements. These are cases such as “equal to or higher than”, and coordinations from the type of Argument-Cluster (e.g. “Alice has visited 4 planets in 2014 and 3 more since then”) and Gapping (e.g. “Bob lives in Earth and Alice in Saturn”) (Dowty, 1988). 2.1 Elements of Coordination Structure While the canonical coordination cases involve conjuncts linked with a coordinator, other elements may also take part in the coordination structure: markers, connective adjectives, parentheticals, and shared arguments and modifiers. These elements are often part of the same syntactic phrase as the conjuncts, and should be taken into account in coordination structure annotation. We elaborate on the possible elements in a coordination phrase: Shared modifiers Modifiers that are related to each of the conjuncts in the phrase. For instance, in “Venus’s density and mean temperature are very high”, Venus’s is a shared modifier of the conjuncts “density” and “mean temperature” 2. Shared arguments Phrases that function as arguments for each of the conjuncts. 
For instance, in “Bob cleaned and refueled the spaceship.”, “the spaceship” and “Bob” are arguments of the conjuncts cleaned and refuel 3. Markers Determiners such as both and either that may appear at the beginning of the coordination phrase (Huddleston et al., 2002). As for example in “Both Alice and Bob are Aliens” and “Either Alice or Bob will drive the spaceship”. In addition to the cases documented by Huddleston et al, our annotation of the Penn Treebank data reveals additional markers. For examples: “between 15 million and 20 million ; “first and second respectively”. Connective adjectives Adverbs such as so, yet, however, then, etc. that commonly appear right after the coordinator (Huddleston et al., 2002). For instance “We plan to meet in the middle of the way and then continue together”. Parenthetical Parenthetical remarks that may appear between the conjuncts. For examples: 2Here, the NP containing the coordination (“Venus’s density and mean temperature”) is itself an argument of “are very high”. 3While both are shared arguments, standard syntactic analyses consider the subject (Bob) to be outside the VP containing the coordination, and the direct object (the spaceship) as a part of the VP. 835 “The vacation packages include hotel accommodations and, in some cases, tours”; “Some shows just don’t impress, he says, and this is one of them”. Consider the coordinated PP phrase in “Alice traveled [both inside and outside the galaxy]PP .” Here, inside and outside are the conjuncts, both is a marker, and “the galaxy” is a shared argument. A good representation of the coordination structure would allow us to identify the different elements and their associated functions. As we show below, it is often not possible to reliably extract such information from the existing PTB annotation scheme. 3 Coordinations in the Penn Tree Bank We now turn to describe how coordination is handled in the PTB, focusing on the parts where we find the annotation scheme to be deficient. There is no explicit annotation for coordination phrases Some coordinators do not introduce a coordination structure. For example, the coordinator “and” can be a discourse marker connecting two sentences (e.g. “And they will even serve it themselves”), or introduce a parenthetical (e.g. “The Wall Street Journal is an excellent publication that I enjoy reading (and must read) daily”). These are not explicitly differentiate in the PTB from the case where “and” connects between at least two elements (e.g. “loyalty and trust”). NPs without internal structure The PTB guidelines (Bies et al., 1995) avoid giving any structure to NPs with nominal modifiers. Following this, 4759 NPs that include coordination were left flat, i.e. all the words in the phrase are at the same level. For example (NP (NNP chairman) (CC and) (NP chief executive officer)) which is annotated in the PTB as: [1] NP NN chairman CC and JJ chief NN executive NN officer It is impossible to reliably extract conjunct boundaries from such structures. Although work has been done for giving internal structures for flat NPs (Vadas and Curran, 2007), only 48% of the flat NP coordinators that include more than two nouns were given an internal structure, leaving 1744 cases of flat NPs with ambiguous conjunct boundaries. Coordination parts are not categorized Coordination phrases may include markers, shared modifiers, shared arguments, connective adjectives and parentheticals. Such elements are annotated on the same level as the conjuncts4. 
This is true not only in the case of flat NPs but also in cases where the coordination phrase elements do have internal structures. For examples: • The Both marker in (NP (DT both) (NP the self) (CC and) (NP the audience)) • The parenthetical maybe in (NP (NP predictive tests) (CC and) (PRN , maybe ,) (NP new therapies)) • The shared-modifier “the economy’s” in (NP (NP the economy’s) (NNS ups) (CC and) (NNS downs)) Automatic categorization of the phrases elements is not trivial. Consider the coordination phrase “a phone, a job, and even into a school”, which is annotated in the PTB where the NPs “a phone” and “a job”, the ADVP “even” and the PP “into a school” are siblings. A human reader can easily deduce that the conjuncts are “a phone”, “a job” and “into a school”, while “even” is a connective. However, for an automatic analyzer, this structure is ambiguous: NPs can be conjoined with ADVPs as well as PPs, and a coordination phrase of the form NP NP CC ADVP PP has at least two possible interpretations: (1) Coord Coord CC Conn Coord (2) Coord Coord CC Coord Shared. Inconsistency in shared elements and markers level The PTB guidelines allows inconsistency in the case of shared ADVP pre-modifiers of VPs (e.g. “deliberately chewed and winked”). The pre-modifier may be annotated in the same level of the VP ((ADVP deliberately) (VP chewed and winked)) or inside it (VP (ADVP deliberately) chewed and winked)). In addition to this documented inconsistency, we also found markers that are inconsistently annotated in and outside the coordination phrase, such as respectively which is 4shared arguments may appear in the PTB outside the coordination phrase. For example He is an argument for bought and for sold in ((He) ((bought) (and) (sold) (stocks))). 836 tagged as sibling to the conjuncts in (NP (NP Feb. 1 1990) (CC and) (NP May. 3 1990), (ADVP respectively)) and as sibling to the conjuncts parent in (VP (VBD were) (NP 7.37% and 7.42%), (ADVP respectively)). Inconsistency in comparative quantity coordination Quantity phrases with a second conjunct of more, less, so, two and up are inconsistently tagged. Consider the following sentences: “[50] [or] [so] projects are locked up”, “Street estimates of [$ 1] [or so] are low”. The coordination phrase is similar in both the sentences but is annotated differently. Various errors The PTB coordination structures include errors. Some are related to flat coordinations (Hogan, 2007). In addition, we found cases where a conjunct is not annotated as a complete phrase, but with two sequenced phrases. For instance, the conjuncts in the sentence “But less than two years later, the LDP started to crumble, and dissent rose to unprecedented heights” are “the LDP started to crumble” and “dissent rose to unprecedented heights”. In the PTB, this sentence is annotated where the first conjunct is splitted into two phrases: “[the LDP] [started to crumble], and [dissent rose to unprecedented heights]”. 4 Extended Coordination Annotation The PTB annotation of coordinations makes it difficult to identify phrases containing coordination and to distinguish the conjuncts from the other parts of a coordination phrase. In addition it contains various errors, inconsistencies and coordination phrases with no internal structure. We propose an improved representation which aims to solve these problems, while keeping the deviation from the original PTB trees to a minimum. 
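To make the starting point concrete, the sketch below locates phrases that contain a true coordinator and counts the NPs among them that are completely flat, i.e. NPs whose conjunct boundaries cannot be read off the bracketing at all. It is only an illustration, assuming NLTK and its bundled Penn Treebank sample; the helper functions are illustrative and are not part of the annotation pipeline described in this paper. The CC-between-siblings test is in the spirit of the criterion used later (Section 5.1) to decide whether a phrase is a coordination phrase at all.

```python
# Illustrative sketch: scan Penn Treebank trees for phrases containing a true
# coordinator and flag flat NPs, i.e. NPs whose children are all preterminals,
# so that conjunct boundaries cannot be recovered from the bracketing alone.
# Assumes NLTK and its bundled 10% sample of the PTB ("treebank") are installed.
from nltk.corpus import treebank
from nltk.tree import Tree

COORDINATORS = {"and", "or", "nor", "but"}

def is_coordination_phrase(phrase):
    """A phrase counts as a coordination if a CC sits between two siblings."""
    children = list(phrase)
    for i, child in enumerate(children):
        if (isinstance(child, Tree) and child.label() == "CC"
                and child[0].lower() in COORDINATORS
                and 0 < i < len(children) - 1):
            return True
    return False

def is_flat(phrase):
    """All children are preterminals (height 2), i.e. no internal bracketing."""
    return all(isinstance(c, Tree) and c.height() == 2 for c in phrase)

flat_coord_nps = 0
for tree in treebank.parsed_sents():
    for phrase in tree.subtrees(lambda t: t.label().startswith("NP")):
        if is_coordination_phrase(phrase) and is_flat(phrase):
            flat_coord_nps += 1

print("flat coordination NPs in the sample:", flat_coord_nps)
```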
4.1 Explicit Function Marking We add function labels to non-terminal symbols of nodes participating in coordination structures. The function labels are indicated by appending a -XXX suffix to the non-terminal symbol, where the XXX mark the function of the node. Phrases containing a coordination are marked with a CCP label. Nodes directly dominated by a CCP node are assigned one of the following labels according to their function: CC for coordinators, COORD for conjuncts, MARK for markers5, CONN for connectives and parentheticals, and SHARED for shared modifiers/arguments. For shared elements, we deal only with those that are inside the coordination phrase. We do not assign function labels to punctuation symbols and empty elements. For example, our annotation for the sentence “...he observed among his fellow students and, more important, among his officers and instructors ...” is: PP CCP PP COORD among his fellow students CC CC and ADVP CONN more important PP COORD IN among NP CCP PRP SHARED his NNS COORD officers CC CC and NNS COORD instructors Table 1 summarizes the number of labels for each type in the enhanced version of the Penn Treebank. Function label # CC 24,572 CCP 24,450 COORD 52,512 SHARED 3372 CONN 526 MARK 522 Table 1: The number of labels that were added to the Penn Treebank by type. 4.2 Changes in Tree Structure As a guiding principle, we try not to change the structure of the original PTB trees. The exceptions to this rule are cases where the structure is changed to provide internal structure when it is missing, as well as when fixing systematic inconsistencies and occasional errors. 1. In flat coordination structures which include elements with more than one word, we add brackets to delimit the element spans. For instance, in the flat NP in [1] we add brackets to delimit the conjunct “chief executive officer”. The full phrase 5both, either, between, first, neither, not, not only, respectively and together 837 structure is: (NP-CCP (NN-COORD chairman) (CC-CC and) (NP-COORD chief executive officer)). 2. Comparative quantity phrases (“5 dollars or less”) are inconsistently analyzed in the PTB. When needed, we add an extra bracket with a QP label so they are consistently analyzed as “5 dollars [or less]QP ”. Note that we do not consider these cases as coordination phrases. 3. We add brackets to delimit the coordination phrase in flat cases that include coordination between modifiers while the head is annotated in the same phrase: NP DT The NN broadcast CC and VBG publishing NN company ⇓ NP DT The UCP-CCP NN-COORD broadcast CC-CC and VBG-COORD publishing NN company company, which is the head of the phrase, is originally annotated at the same level as the conjuncts broadcast and publishing, and the determiner the. In such cases, the determiner and modifiers are related to the head which is not part of the coordination phrase, requiring the extra bracketing level to delimit the coordination. This is in contrast to the case of coordination between verbs (e.g “Bob (VP cleaned and refueled the spaceship)”), where the non coordinated elements (“the spaceship”) are shared. 4. 
When a conjunct is split into two phrases or more due to an error, we add extra brackets to delimit the conjunct as a complete phrase: S NP Management’s total VP could be reduced CC and S NP the public VP could get more ⇓ Type # (1) Flat structures 1872 (2) Comparative quantity phrases 52 (3) Coordination between modifiers 1264 (4) Coordination with errors 213 (5) ADVP inconsistency 206 Table 2: The number of subtrees in the Penn Treebank that were changed in our annotation by type. S-CCP S-COORD NP Management’s total VP could be reduced CC-CC and S-COORD NP the public VP could get more 5. We consolidate cases where markers and ADVP pre-modifiers are annotated outside the coordination phrase, so they are consistently annotated inside the coordination phrase. Table 2 summarizes the numbers and types of subtrees that receive a new tree structure in the enhanced version of the Penn Treebank. 5 The Annotation Process Some of the changes can be done automatically, while other require human judgment. Our annotation procedure combines automatic rules and manual annotation that was performed by a dedicated annotator that was trained for this purpose. 5.1 Explicit marking of coordination phrases We automatically annotate coordination phrases with a CCP function label. We consider a phrase as coordination phrase if it includes a coordinator and at least one phrase on each side of the coordinator, unlike coordinators that function as discourse markers or introduce parentheticals, which appear as the first element in the phrase. 5.2 Assigning internal structure to flat coordinations Flat coordinations that include only a coordinator and two conjuncts (e.g. (NP (NNP Poland) (CC and) (NNP Hungary))) are trivial and are left with the same structure. For the rest of the flat coordinations (3498 cases), we manually annotated the elements spans. For example, given the flat 838 NP: “[General]NNP [Electric]NNP [Co.]NNP [executives]NNS [and]CC [lawyers]NNS”. The annotator is expected to provide the analysis: “[General Electric Co.] [executives] [and] [lawyers]”. We then add brackets around multitoken elements (e.g. “General Electric Co.”), and set the label according the syntactic structure. The annotation was done while ignoring inner structures that were given in the NP-Bracketing extension of Vadas and Curran (2007). We compare agreement with their annotations in the next section. To handle cases such as in 4.2(3), where the coordination is between modifiers of a head which is annotated in the PTB on the same level of the conjuncts, we first identify potential candidate phrases of this type by looking for coordination phrases where the last element was not tagged by the annotator as a conjunct. Out of this set, we remove cases where we can reliably identify the non-conjunct element as a marker. For the rest of the cases, we distinguish between NP phrases and non-NP phrases. For NP phrases, we automatically add extra brackets to delimit the coordination phrase span so that it includes only the coordinated modifiers. For the rest of the phrases we found that an such automatic procedure was not feasible (consider the ADVP phrases: (ADVP (RBR farther) (CC and) (RBR farther) (RB apart)) ; (ADVP (RB up) (CC and) (RB down) (NP (NNP Florida))). The first phrase head is apart while in the second phrase, Florida is a complement). We manually annotated the coordination phrase boundary in these cases. 
When adding an extra tree level in this cases, we set its syntactic label to UCP when the conjuncts are from different types and same as the conjuncts label when the conjuncts are from the same type.6 5.3 Annotating roles within coordination phrases Cases where there are only a coordinator and two siblings in the coordinated phrase are trivial to automatically annotate, marking both siblings as conjuncts: 6When the conjuncts are in POS level, a corresponding syntactic label is set. For example: (NP-CCP (NN-COORD head) (CC-CC and) (NNS-COORD shoulders)) ADVP-CCP ADVP-COORD later this week CC or ADVP-COORD early next week To categorize the phrase elements for the rest of the coordination phrases, we first manually marked the conjuncts in the sentence (for flat structures, the conjuncts were already annotated in the internal structure annotation phase). The annotator was given a sentence where the coordinator and the coordination phrase boundaries are marked. For example “Coke has been able to improve (bottlers’ efficiency and production, {and} in some cases, marketing)”. The annotation task was to mark the conjuncts.7 We automatically concluded the types of the other elements according to their relative position – elements before or after the conjuncts are categorized as markers/shared, while an element between conjuncts is a connective or the coordinator itself. Mismatches with the PTB phrase boundaries In 5% of the cases of coordination with inner structure, a conjunct span as it was annotated by our annotator was not consistent with the elements spans in the PTB. For example, the annotator provided the following annotation: “(The [economic loss], [jobs lost], [anguish],[frustration] {and} [humiliation]) are beyond measure”, treating the determiner “The” as a shared modifier. In contrast, the PTB analysis considers “The” as part of the first conjunct (“[The economic loss]”). The vast majority of the mismatches were on the point of a specific word such as the (as demonstrated in the above example), to, a and punctuation symbols. In a small number of cases the mismatch was because of an ambiguity. For example, in “The declaration immediately made the counties eligible for (temporary housing, grants {and} low-cost loans to cover uninsured property losses)” the annotator marked “temporary housing”, “grants”, and “low-cost loans” as conjuncts (leaving “to cover uninsured property loss” as a shared 7The coordination phrase boundaries were taken from the PTB annotations and were used to focus the annotators attention, rather than to restrict the annotation. The annotators were allowed to override them if they thought they were erronous. We did not encounter such cases. 839 modifier, while the PTB annotation considers “to cover. . . ” as part of the last conjunct. Following our desiderata of minimizing changes to existing tree structures, in a case of a mismatch we extend the conjunct spans to be consistent with the PTB phrasing (each such case was manually verified). 5.4 Handling inconsistencies and errors We automatically recognize ADVPs that appear right before a VP coordination phrase and markers that are adjunct to a coordination phrase. We change the structure such that such ADVPs and markers appear inside the coordination phrase. Quantity phrases that includes two conjuncts with a second conjunct of more, less, so, two and up are automatically recognized and consolidated by adding an extra level. 
Errors in conjuncts span are found during the manual annotation that is done for the categorization. When the manual annotation includes a conjunct that is originally a combination of two siblings phrases, we add extra brackets and name the new level according to the syntactic structure. 6 Annotator Agreement We evaluate the resulting corpus with interannotators agreement for coordination phrases with inner structure as well as agreement with the flat conjuncts that were annotated in the NP bracketing annotation effort of Vadas and Curran (2007). 6.1 Inter-annotator agreement To test the inter-annotator agreement, we were assisted with an additional linguist who annotated 1000 out of 7823 coordination phrases with inner structure. We measured the number of coordination phrases where the spans are inconsistent at least in one conjunct. The annotators originally agreed in 92.8% of the sentences. After revision, the agreement increased to 98.1%. The disagreements occurred in semantically ambiguous cases. For instance, “potato salad, baked beans and pudding, plus coffee or iced tea” was tagged differently by the 2 annotators. One considered “pudding” as the last conjunct and the other marked “pudding, plus coffee or iced tea”. 6.2 Agreement with NP Bracketing for flat coordinations The NP Bracketing extension of Vadas and Curran (2007) includes inner structures for flat NP phrases R P F1 PTB + NPB 90.41 86.12 88.21 PTB + NPB + CCP 90.83 91.18 91.01 Table 3: The parser results on section 22. in the PTB, that are given an internal structure using the NML tag. For instance, in (NP (NNP Air) (NNP Force) (NN contract)), “Air Force” is considered as an independent entity and thus is delimited with the NML tag: (NP (NML (NNP Air) (NNP Force)) (NN contract)). As mentioned, 48% (1655 sentences) of the NP flat coordination were disambiguated in this effort.8 For these, the agreement on the conjuncts spans with the way they were marked by our annotators is 88%. The disagreements were in cases where a modifier is ambiguous. For examples consider “luxury” in “The luxury airline and casino company”, “scientific” in “scientific institutions or researchers” and “Japanese” in “some Japanese government officials and businessmen”. In cases of disagreement we followed our annotators decisions.9 7 Experiments We evaluate the impact of the new annotation on the PTB parsing accuracy. We use the stateof-the-art Berkeley parser (Petrov et al., 2006), and compare the original PTB annotations (including Vadas and Curran’s base-NP bracketing – PTB+NPB) to the coordination annotations in this work (PTB+NPB+CCP). We use sections 221 for training, and report accuracies on the traditional dev set (section 22). The parse trees are scored using EVALB (Sekine and Collins, 1997). Structural Changes We start by considering how the changes in tree structures affect the parser performance. We compared the parsing performance when trained and tested on PTB+NPB, to the parsing performance when trained and tested on PTB+NPB+CCP. The new function labels were ignored in both training and testing. The results 8We consider a flat NP coordination as disambiguated if it includes a coordinator and two other elements, i.e.: (NML (NML (NN eye) (NN care)) (CC and) (NML (NN skin) (NN care))) ; (NML (NN buy) (CC or) (NN sell)). 
9A by-product of this process is a list of ambiguous modifier attachment cases, which can be used for future research on coordination disambiguation, for example in designing error metrics that take such annotator disagreements into account. 840 Gold Pred CC CCP COORD MARK SHARED CONN None Err CC 849 1 5 CCP 552 1 91 205 COORD 3 1405 2 184 200 MARK 9 2 1 SHARED 1 29 85 3 CONN 1 4 2 None 4 124 113 4 26 14 Table 4: Confusion-matrix over the predicted function labels. None indicate no function label (a constituent which is not directly inside a CCP phrase). Err indicate cases in which the gold span was not predicted by the parser. are presented in Table 3. Parsing accuracy on the coordination-enhanced corpus is higher than on the original trees. However, the numbers are not strictly comparable, as the test sets contain trees with somewhat different number of constituents. To get a fairer comparison, we also evaluate the parsers on the subset of trees in section 22 whose structures did not change. We check two conditions: trees that include coordination, and trees that do not include coordination. Here, we see a small drop in parsing accuracy when using the new annotation. When trained and tested on PTB+NPB+CCP, the parser results are slightly decreased compared to PTB+NPB – from 89.89% F1 to 89.4% F1 for trees with coordination and from 91.78% F1 to 91.75% F1 for trees without coordination. However, the drop is small and it is clear that the changes did not make the corpus substantially harder to parse. We also note that the parsing results for trees including coordinations are lower than those for trees without coordination, highlighting the challenge in parsing coordination structures. Function Labels How good is the parser in predicting the function labels, distinguishing between conjuncts, markers, connectives and shared modifiers? When we train and test the parser on trees that include the function labels, we see a rather large drop in accuracy: from 89.89% F1 (for trees that include a coordination) to 85.27% F1. A closer look reveals that a large part of this drop is superficial: taking function labels into account cause errors in coordination scope to be punished multiple times.10 When we train the parser with 10Consider the gold structure (NP (NP-CCP (DT-MARK a) (NP-COORD b) (CC and) (NP-COORD c) (PP-SHARED d))) and the incorrect prediction (NP (DT a) (NP-CCP (NPfunction labels but ignore them at evaluation time, the results climb back up to 87.45% F1. Furthermore, looking at coordination phrases whose structure was perfectly predicted (65.09% of the cases), the parser assigned the correct function label for all the coordination parts in 98.91% of the cases. The combined results suggest that while the parser is reasonably effective at assigning the correct function labels, there is still work to be done on this form of disambiguation. The availability of function labels annotation allows us to take a finer-grained look at the parsing behavior on coordination. Table 4 lists the parser assigned labels against the gold labels. Common cases of error are (1) conjuncts identification – where 200 out of 1794 gold conjuncts were assigned an incorrect span and 113 non-conjunct spans were predicted as participating as conjuncts in a coordination phrase; and (2) Shared elements identification, where 74.57% of the gold shared elements were analyzed as either out of the coordination phrase or as part of the last coordinates. 
These numbers suggest possible areas of future research with respect to coordination disambiguation which are likely to provide high gains. 8 Conclusions Coordination is a frequent and important syntactic phenomena, that pose a great challenge to automatic syntactic annotation. Unfortunately, the current state of coordination annotation in the PTB is lacking. We present a version of the PTB with improved annotation for coordination structure. The COORD b) (CC and) (NP-COORD c)) (PP d)). When taking only the syntactic labels into account there is only the mistake of the coordination span. When taking the coordination roles into account, there are two additional mistakes – the missing labels for a and d. 841 new annotation adds structure to the previously flat NPs, unifies inconsistencies, fix errors, and marks the role of different participants in the coordination structure with respect to the coordination. We make our annotation available to the NLP community. This resource is a necessary first step towards better disambiguation of coordination structures in syntactic parsers. Acknowledgments This work was supported by The Allen Institute for Artificial Intelligence as well as the German Research Foundation via the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1). References Ann Bies, Mark Ferguson, Karen Katz, Robert MacIntyre, Victoria Tredinnick, Grace Kim, Mary Ann Marcinkiewicz, and Britta Schasberger. 1995. Bracketing guidelines for treebank ii style penn treebank project. University of Pennsylvania, 97:100. David Dowty. 1988. Type raising, functional composition, and non-constituent conjunction. In Categorial grammars and natural language structures, pages 153–197. Springer. Kazuo Hara, Masashi Shimbo, Hideharu Okuma, and Yuji Matsumoto. 2009. Coordinate structure analysis with global structural constraints and alignmentbased local features. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2Volume 2, pages 967–975. Association for Computational Linguistics. Deirdre Hogan. 2007. Coordinate noun phrase disambiguation in a generative parsing model. Association for Computational Linguistics. Rodney Huddleston, Geoffrey K Pullum, et al. 2002. The cambridge grammar of english. Language. Cambridge: Cambridge University Press, pages 1273–1362. J-D Kim, Tomoko Ohta, Yuka Tateisi, and Junichi Tsujii. 2003. Genia corpusa semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl 1):i180–i182. Wolfgang Maier, Erhard Hinrichs, Sandra K¨ubler, and Julia Krivanek. 2012. Annotating coordination in the penn treebank. In Proceedings of the Sixth Linguistic Annotation Workshop, pages 166–174. Association for Computational Linguistics. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 433– 440. Association for Computational Linguistics. Satoshi Sekine and Michael Collins. 1997. Evalb bracket scoring program. URL http://nlp. cs. nyu. edu/evalb/EVALB. tgz. Masashi Shimbo and Kazuo Hara. 2007. 
A discriminative learning model for coordinate conjunctions. In EMNLP-CoNLL, pages 610–619. David Vadas and James Curran. 2007. Adding noun phrase structure to the Penn Treebank. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, page 240.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 76–85, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Modeling Coverage for Neural Machine Translation Zhaopeng Tu† Zhengdong Lu† Yang Liu‡ Xiaohua Liu† Hang Li† †Noah’s Ark Lab, Huawei Technologies, Hong Kong {tu.zhaopeng,lu.zhengdong,liuxiaohua3,hangli.hl}@huawei.com ‡Department of Computer Science and Technology, Tsinghua University, Beijing [email protected] Abstract Attention mechanism has enhanced stateof-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.1 1 Introduction The past several years have witnessed the rapid progress of end-to-end Neural Machine Translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2015). Unlike conventional Statistical Machine Translation (SMT) (Koehn et al., 2003; Chiang, 2007), NMT uses a single and large neural network to model the entire translation process. It enjoys the following advantages. First, the use of distributed representations of words can alleviate the curse of dimensionality (Bengio et al., 2003). Second, there is no need to explicitly design features to capture translation regularities, which is quite difficult in SMT. Instead, NMT is capable of learning representations directly from the training data. Third, Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) enables NMT to cap1Our code is publicly available at https://github. com/tuzhaopeng/NMT-Coverage. ture long-distance reordering, which is a significant challenge in SMT. NMT has a serious problem, however, namely lack of coverage. In phrase-based SMT (Koehn et al., 2003), a decoder maintains a coverage vector to indicate whether a source word is translated or not. This is important for ensuring that each source word is translated in decoding. The decoding process is completed when all source words are “covered” or translated. In NMT, there is no such coverage vector and the decoding process ends only when the end-of-sentence mark is produced. We believe that lacking coverage might result in the following problems in conventional NMT: 1. Over-translation: some words are unnecessarily translated for multiple times; 2. Under-translation: some words are mistakenly untranslated. Specifically, in the state-of-the-art attention-based NMT model (Bahdanau et al., 2015), generating a target word heavily depends on the relevant parts of the source sentence, and a source word is involved in generation of all target words. As a result, over-translation and under-translation inevitably happen because of ignoring the “coverage” of source words (i.e., number of times a source word is translated to a target word). Figure 1(a) shows an example: the Chinese word “gu¯anb`ı” is over translated to “close(d)” twice, while “b`eip`o” (means “be forced to”) is mistakenly untranslated. 
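These failure modes leave a visible trace in the attention weights themselves: summed over all decoding steps, the attention mass a source word receives acts as an implicit coverage count. The sketch below is a purely illustrative diagnostic with invented data and arbitrary thresholds, not part of the model proposed in this paper.

```python
# Purely illustrative diagnostic (not part of the proposed model): summed over
# decoding steps, the attention mass each source word receives is a rough
# signal for the two failure modes -- a very large total suggests
# over-translation, a very small total suggests under-translation.  The
# thresholds and the toy alignment below are invented to mirror the example.
import numpy as np

def coverage_diagnostic(alignment, src_tokens, low=0.2, high=2.5):
    """alignment: (target_len, source_len) attention weights, rows sum to 1."""
    totals = alignment.sum(axis=0)            # attention mass per source word
    over = [w for w, t in zip(src_tokens, totals) if t > high]
    under = [w for w, t in zip(src_tokens, totals) if t < low]
    return over, under

attn = np.array([[0.70, 0.05, 0.25],
                 [0.80, 0.05, 0.15],
                 [0.70, 0.02, 0.28],
                 [0.75, 0.03, 0.22]])
print(coverage_diagnostic(attn, ["guanbi", "beipo", "jichang"]))
# -> (['guanbi'], ['beipo'])
```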
In this work, we propose a coverage mechanism to NMT (NMT-COVERAGE) to alleviate the overtranslation and under-translation problems. Basically, we append a coverage vector to the intermediate representations of an NMT model, which are sequentially updated after each attentive read 76 (a) Over-translation and under-translation generated by NMT. (b) Coverage model alleviates the problems of over-translation and under-translation. Figure 1: Example translations of (a) NMT without coverage, and (b) NMT with coverage. In conventional NMT without coverage, the Chinese word “gu¯anb`ı” is over translated to “close(d)” twice, while “b`eip`o” (means “be forced to”) is mistakenly untranslated. Coverage model alleviates these problems by tracking the “coverage” of source words. during the decoding process, to keep track of the attention history. The coverage vector, when entering into attention model, can help adjust the future attention and significantly improve the overall alignment between the source and target sentences. This design contains many particular cases for coverage modeling with contrasting characteristics, which all share a clear linguistic intuition and yet can be trained in a data driven fashion. Notably, we achieve significant improvement even by simply using the sum of previous alignment probabilities as coverage for each word, as a successful example of incorporating linguistic knowledge into neural network based NLP models. Experiments show that NMT-COVERAGE significantly outperforms conventional attentionbased NMT on both translation and alignment tasks. Figure 1(b) shows an example, in which NMT-COVERAGE alleviates the over-translation and under-translation problems that NMT without coverage suffers from. 2 Background Our work is built on attention-based NMT (Bahdanau et al., 2015), which simultaneously conducts dynamic alignment and generation of the target sentence, as illustrated in Figure 2. It Figure 2: Architecture of attention-based NMT. Whenever possible, we omit the source index j to make the illustration less cluttered. produces the translation by generating one target word yi at each time step. Given an input sentence x = {x1, . . . , xJ} and previously generated words {y1, . . . , yi−1}, the probability of generating next word yi is P(yi|y<i, x) = softmax g(yi−1, ti, si)  (1) where g is a non-linear function, and ti is a decoding state for time step i, computed by ti = f(ti−1, yi−1, si) (2) Here the activation function f(·) is a Gated Recurrent Unit (GRU) (Cho et al., 2014b), and si is 77 a distinct source representation for time i, calculated as a weighted sum of the source annotations: si = J X j=1 αi,j · hj (3) where hj = [−→h ⊤ j ; ←−h ⊤ j ] ⊤is the annotation of xj from a bi-directional Recurrent Neural Network (RNN) (Schuster and Paliwal, 1997), and its weight αi,j is computed by αi,j = exp(ei,j) PJ k=1 exp(ei,k) (4) and ei,j = a(ti−1, hj) = v⊤ a tanh(Wati−1 + Uahj) (5) is an attention model that scores how well yi and hj match. With the attention model, it avoids the need to represent the entire source sentence with a single vector. Instead, the decoder selects parts of the source sentence to pay attention to, thus exploits an expected annotation si over possible alignments αi,j for each time step i. However, the attention model fails to take advantage of past alignment information, which is found useful to avoid over-translation and undertranslation problems in conventional SMT (Koehn et al., 2003). 
For example, if a source word is translated in the past, it is less likely to be translated again and should be assigned a lower alignment probability. 3 Coverage Model for NMT In SMT, a coverage set is maintained to keep track of which source words have been translated (“covered”) in the past. Let us take x = {x1, x2, x3, x4} as an example of input sentence. The initial coverage set is C = {0, 0, 0, 0} which denotes that no source word is yet translated. When a translation rule bp = (x2x3, ymym+1) is applied, we produce one hypothesis labelled with coverage C = {0, 1, 1, 0}. It means that the second and third source words are translated. The goal is to generate translation with full coverage C = {1, 1, 1, 1}. A source word is translated when it is covered by one translation rule, and it is not allowed to be translated again in the future (i.e., hard coverage). In this way, each source word is guaranteed to be translated and only be translated once. As shown, Figure 3: Architecture of coverage-based attention model. A coverage vector Ci−1 is maintained to keep track of which source words have been translated before time i. Alignment decisions αi are made jointly taking into account past alignment information embedded in Ci−1, which lets the attention model to consider more about untranslated source words. coverage is essential for SMT since it avoids gaps and overlaps in translation of source words. Modeling coverage is also important for attention-based NMT models, since they generally lack a mechanism to indicate whether a certain source word has been translated, and therefore are prone to the “coverage” mistakes: some parts of source sentence have been translated more than once or not translated. For NMT models, directly modeling coverage is less straightforward, but the problem can be significantly alleviated by keeping track of the attention signal during the decoding process. The most natural way for doing that would be to append a coverage vector to the annotation of each source word (i.e., hj), which is initialized as a zero vector but updated after every attentive read of the corresponding annotation. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words, as illustrated in Figure 3. 3.1 Coverage Model Since the coverage vector summarizes the attention record for hj (and therefore for a small neighbor centering at the jth source word), it will discourage further attention to it if it has been heavily attended, and implicitly push the attention to the less attended segments of the source sentence since the attention weights are normalized to one. This can potentially solve both coverage mistakes mentioned above, when modeled and learned properly. 78 Formally, the coverage model is given by Ci,j = gupdate Ci−1,j, αi,j, Φ(hj), Ψ  (6) where • gupdate(·) is the function that updates Ci,j after the new attention αi,j at time step i in the decoding process; • Ci,j is a d-dimensional coverage vector summarizing the history of attention till time step i on hj; • Φ(hj) is a word-specific feature with its own parameters; • Ψ are auxiliary inputs exploited in different sorts of coverage models. Equation 6 gives a rather general model, which could take different function forms for gupdate(·) and Φ(·), and different auxiliary inputs Ψ (e.g., previous decoding state ti−1). 
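A minimal sketch of this bookkeeping is given below. The scoring and update functions are deliberately left abstract, since they are instantiated by the models in the remainder of this section, and all shapes are assumptions made only for illustration.

```python
# Minimal bookkeeping sketch (shapes and the two callables are placeholders,
# for illustration only): every source annotation h_j carries a coverage
# vector C_j that starts at zero, is read by the attention scorer, and is
# refreshed after each attentive read.
import numpy as np

def decode_step(h, C, t_prev, score, g_update):
    """h: (J, 2n) annotations, C: (J, d) coverage, t_prev: decoder state."""
    e = np.array([score(t_prev, h[j], C[j]) for j in range(len(h))])
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                               # attention weights a_{i,j}
    s = alpha @ h                                      # attentive read s_i
    C = np.stack([g_update(C[j], alpha[j], h[j], t_prev)
                  for j in range(len(h))])             # update after the read
    return s, alpha, C

# toy usage: 3 source positions, annotation size 4, scalar coverage
h = np.random.randn(3, 4)
C = np.zeros((3, 1))
t = np.zeros(4)
score = lambda t, hj, Cj: float(hj.sum() - Cj.sum())   # any scorer will do here
g_upd = lambda Cj, a, hj, t: Cj + a                    # e.g. accumulate attention
s, alpha, C = decode_step(h, C, t, score, g_upd)
```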
In the rest of this section, we will give a number of representative implementations of the coverage model, which either leverage more linguistic information (Section 3.1.1) or resort to the flexibility of neural network approximation (Section 3.1.2). 3.1.1 Linguistic Coverage Model We first consider at linguistically inspired model which has a small number of parameters, as well as clear interpretation. While the linguisticallyinspired coverage in NMT is similar to that in SMT, there is one key difference: it indicates what percentage of source words have been translated (i.e., soft coverage). In NMT, each target word yi is generated from all source words with probability αi,j for source word xj. In other words, the source word xj is involved in generating all target words and the probability of generating target word yi at time step i is αi,j. Note that unlike in SMT in which each source word is fully translated at one decoding step, the source word xj is partially translated at each decoding step in NMT. Therefore, the coverage at time step i denotes the translated ratio of that each source word is translated. We use a scalar (d = 1) to represent linguistic coverage for each source word and employ an accumulate operation for gupdate. The initial value of linguistic coverage is zero, which denotes that the corresponding source word is not translated yet. We iteratively construct linguistic coverages through accumulation of alignment probabilities generated by the attention model, each of which is normalized by a distinct contextdependent weight. The coverage of source word xj at time step i is computed by Ci,j = Ci−1,j + 1 Φj αi,j = 1 Φj i X k=1 αk,j (7) where Φj is a pre-defined weight which indicates the number of target words xj is expected to generate. The simplest way is to follow Xu et al. (2015) in image-to-caption translation to fix Φ = 1 for all source words, which means that we directly use the sum of previous alignment probabilities without normalization as coverage for each word, as done in (Cohn et al., 2016). However, in machine translation, different types of source words may contribute differently to the generation of target sentence. Let us take the sentence pairs in Figure 1 as an example. The noun in the source sentence “j¯ıchˇang” is translated into one target word “airports”, while the adjective “b`eip`o” is translated into three words “were forced to”. Therefore, we need to assign a distinct Φj for each source word. Ideally, we expect Φj = PI i=1 αi,j with I being the total number of time steps in decoding. However, such desired value is not available before decoding, thus is not suitable in this scenario. Fertility To predict Φj, we introduce the concept of fertility, which is firstly proposed in wordlevel SMT (Brown et al., 1993). Fertility of source word xj tells how many target words xj produces. In SMT, the fertility is a random variable Φj, whose distribution p(Φj = φ) is determined by the parameters of word alignment models (e.g., IBM models). In this work, we simplify and adapt fertility from the original model and compute the fertility Φj by2 Φj = N(xj|x) = N · σ(Ufhj) (8) where N ∈R is a predefined constant to denote the maximum number of target words one source 2Fertility in SMT is a random variable with a set of fertility probabilities, n(Φj|xj) = p(Φ<j, x), which depends on the fertilities of previous source words. 
To simplify the calculation and adapt it to the attention model in NMT, we define the fertility in NMT as a constant number, which is independent of previous fertilities. 79 Figure 4: NN-based coverage model. word can produce, σ(·) is a logistic sigmoid function, and Uf ∈R1×2n is the weight matrix. Here we use hj to denote (xj|x) since hj contains information about the whole input sentence with a strong focus on the parts surrounding xj (Bahdanau et al., 2015). Since Φj does not depend on i, we can pre-compute it before decoding to minimize the computational cost. 3.1.2 Neural Network Based Coverage Model We next consider Neural Network (NN) based coverage model. When Ci,j is a vector (d > 1) and gupdate(·) is a neural network, we actually have an RNN model for coverage, as illustrated in Figure 4. In this work, we take the following form: Ci,j = f(Ci−1,j, αi,j, hj, ti−1) where f(·) is a nonlinear activation function and ti−1 is the auxiliary input that encodes past translation information. Note that we leave out the word-specific feature function Φ(·) and only take the input annotation hj as the input to the coverage RNN. It is important to emphasize that the NN-based coverage model is able to be fed with arbitrary inputs, such as the previous attentional context si−1. Here we only employ Ci−1,j for past alignment information, ti−1 for past translation information, and hj for word-specific bias.3 Gating The neural function f(·) can be either a simple activation function tanh or a gating function that proves useful to capture long-distance 3In our preliminary experiments, considering more inputs (e.g., current and previous attentional contexts, unnormalized attention weights ei,j) does not always lead to better translation quality. Possible reasons include: 1) the inputs contains duplicate information, and 2) more inputs introduce more back-propagation paths and therefore make it difficult to train. In our experience, one principle is to only feed the coverage model inputs that contain distinct information, which are complementary to each other. dependencies. In this work, we adopt GRU for the gating activation since it is simple yet powerful (Chung et al., 2014). Please refer to (Cho et al., 2014b) for more details about GRU. Discussion Intuitively, the two types of models summarize coverage information in “different languages”. Linguistic models summarize coverage information in human language, which has a clear interpretation to humans. Neural models encode coverage information in “neural language”, which can be “understood” by neural networks and let them to decide how to make use of the encoded coverage information. 3.2 Integrating Coverage into NMT Although attention based model has the capability of jointly making alignment and translation, it does not take into consideration translation history. Specifically, a source word that has significantly contributed to the generation of target words in the past, should be assigned lower alignment probabilities, which may not be the case in attention based NMT. To address this problem, we propose to calculate the alignment probabilities by incorporating past alignment information embedded in the coverage model. Intuitively, at each time step i in the decoding phase, coverage from time step (i −1) serves as an additional input to the attention model, which provides complementary information of that how likely the source words are translated in the past. 
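As an illustration of this wiring, the sketch below instantiates the abstract update of the earlier sketch with the fertility-normalized linguistic coverage of Section 3.1.1 (Equations 7 and 8) and lets the attention score read the coverage as well, anticipating the rewritten attention model given next; parameter names and shapes are assumptions, not the released implementation.

```python
# Illustrative instantiation (an assumption-laden sketch, not the released
# code): fertility prediction and the fertility-normalized coverage update of
# Section 3.1.1, plus an attention score that also reads the coverage C_{i-1,j}.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fertility(h, U_f, N=2.0):
    """Eq. (8): expected number of target words per source word, Phi_j."""
    return N * sigmoid(h @ U_f)                     # h: (J, 2n), U_f: (2n,)

def linguistic_update(C, alpha, phi):
    """Eq. (7): accumulate alignment probabilities, normalized by fertility."""
    return C + alpha / phi                          # all of shape (J,)

def coverage_score(t_prev, h_j, C_j, W_a, U_a, V_a, v_a):
    """Coverage-aware score e_{i,j}; C_j is a scalar in the linguistic case."""
    return v_a @ np.tanh(W_a @ t_prev + U_a @ h_j + V_a * C_j)
```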
We expect the coverage information would guide the attention model to focus more on untranslated source words (i.e., assign higher alignment probabilities). In practice, we find that the coverage model does fulfill the expectation (see Section 5). The translated ratios of source words from linguistic coverages negatively correlate to the corresponding alignment probabilities. More formally, we rewrite the attention model in Equation 5 as ei,j = a(ti−1, hj, Ci−1,j) = v⊤ a tanh(Wati−1 + Uahj + VaCi−1,j) where Ci−1,j is the coverage of source word xj before time i. Va ∈Rn×d is the weight matrix for coverage with n and d being the numbers of hidden units and coverage units, respectively. 80 4 Training We take end-to-end learning for the NMTCOVERAGE model, which learns not only the parameters for the “original” NMT (i.e., θ for encoding RNN, decoding RNN, and attention model) but also the parameters for coverage modeling (i.e., η for annotation and guidance of attention) . More specifically, we choose to maximize the likelihood of reference sentences as most other NMT models (see, however (Shen et al., 2016)): (θ∗, η∗) = arg max θ,η N X n=1 log P(yn|xn; θ, η) (9) No auxiliary objective For the coverage model with a clearer linguistic interpretation (Section 3.1.1), it is possible to inject an auxiliary objective function on some intermediate representation. More specifically, we may have the following objective: (θ∗, η∗) = arg max θ,η N X n=1 ( log P(yn|xn; θ, η) −λ n J X j=1 (Φj − I X i=1 αi,j)2; η o) where the term  PJ j=1(Φj −PI i=1 αi,j)2; η penalizes the discrepancy between the sum of alignment probabilities and the expected fertility for linguistic coverage. This is similar to the more explicit training for fertility as in Xu et al. (2015), which encourages the model to pay equal attention to every part of the image (i.e., Φj = 1). However, our empirical study shows that the combined objective consistently worsens the translation quality while slightly improves the alignment quality. Our training strategy poses less constraints on the dependency between Φj and the attention than a more explicit strategy taken in (Xu et al., 2015). We let the objective associated with the translation quality (i.e., the likelihood) to drive the training, as in Equation 9. This strategy is arguably advantageous, since the attention weight on a hidden state hj cannot be interpreted as the proportion of the corresponding word being translated in the target sentence. For one thing, the hidden state hj, after the transformation from encoding RNN, bears the contextual information from other parts of the source sentence, and thus loses the rigid correspondence with the corresponding word. Therefore, penalizing the discrepancy between the sum of alignment probabilities and the expected fertility does not hold in this scenario. 5 Experiments 5.1 Setup We carry out experiments on a Chinese-English translation task. Our training data for the translation task consists of 1.25M sentence pairs extracted from LDC corpora4 , with 27.9M Chinese words and 34.5M English words respectively. We choose NIST 2002 dataset as our development set, and the NIST 2005, 2006 and 2008 datasets as our test sets. We carry out experiments of the alignment task on the evaluation dataset from (Liu and Sun, 2015), which contains 900 manually aligned Chinese-English sentence pairs. 
We use the caseinsensitive 4-gram NIST BLEU score (Papineni et al., 2002) for the translation task, and the alignment error rate (AER) (Och and Ney, 2003) for the alignment task. To better estimate the quality of the soft alignment probabilities generated by NMT, we propose a variant of AER, naming SAER: SAER = 1 −|MA × MS| + |MA × MP | |MA| + |MS| where A is a candidate alignment, and S and P are the sets of sure and possible links in the reference alignment respectively (S ⊆P). M denotes alignment matrix, and for both MS and MP we assign the elements that correspond to the existing links in S and P with probabilities 1 while assign the other elements with probabilities 0. In this way, we are able to better evaluate the quality of the soft alignments produced by attention-based NMT. We use sign-test (Collins et al., 2005) for statistical significance test. For efficient training of the neural networks, we limit the source and target vocabularies to the most frequent 30K words in Chinese and English, covering approximately 97.7% and 99.3% of the two corpora respectively. All the out-of-vocabulary words are mapped to a special token UNK. We set N = 2 for the fertility model in the linguistic coverages. We train each model with the sentences of length up to 80 words in the training data. The word embedding dimension is 620 and the size of a hidden layer is 1000. All the other settings are the same as in (Bahdanau et al., 2015). 4The corpora include LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06. 81 # System #Params MT05 MT06 MT08 Avg. 1 Moses – 31.37 30.85 23.01 28.41 2 GroundHog 84.3M 30.61 31.12 23.23 28.32 3 + Linguistic coverage w/o fertility +1K 31.26† 32.16†‡ 24.84†‡ 29.42 4 + Linguistic coverage w/ fertility +3K 32.36†‡ 32.31†‡ 24.91†‡ 29.86 5 + NN-based coverage w/o gating (d = 1) +4K 31.94†‡ 32.11†‡ 23.31 29.12 6 + NN-based coverage w/ gating (d = 1) +10K 31.94†‡ 32.16†‡ 24.67†‡ 29.59 7 + NN-based coverage w/ gating (d = 10) +100K 32.73†‡ 32.47†‡ 25.23†‡ 30.14 Table 1: Evaluation of translation quality. d denotes the dimension of NN-based coverages, and † and ‡ indicate statistically significant difference (p < 0.01) from GroundHog and Moses, respectively. “+” is on top of the baseline system GroundHog. We compare our method with two state-of-theart models of SMT and NMT5: • Moses (Koehn et al., 2007): an open source phrase-based translation system with default configuration and a 4-gram language model trained on the target portion of training data. • GroundHog (Bahdanau et al., 2015): an attention-based NMT system. 5.2 Translation Quality Table 1 shows the translation performances measured in BLEU score. Clearly the proposed NMTCOVERAGE significantly improves the translation quality in all cases, although there are still considerable differences among different variants. Parameters Coverage model introduces few parameters. The baseline model (i.e., GroundHog) has 84.3M parameters. The linguistic coverage using fertility introduces 3K parameters (2K for fertility model), and the NN-based coverage with gating introduces 10K×d parameters (6K×d for gating), where d is the dimension of the coverage vector. In this work, the most complex coverage model only introduces 0.1M additional parameters, which is quite small compared to the number of parameters in the existing model (i.e., 84.3M). Speed Introducing the coverage model slows down the training speed, but not significantly. 
When running on a single GPU device Tesla K80, the speed of the baseline model is 960 target words per second. System 4 (“+Linguistic coverage with fertility”) has a speed of 870 words per second, while System 7 (“+NN-based coverage (d=10)”) achieves a speed of 800 words per second. 5There are recent progress on aggregating multiple models or enlarging the vocabulary(e.g., in (Jean et al., 2015)), but here we focus on the generic models. Linguistic Coverages (Rows 3 and 4): Two observations can be made. First, the simplest linguistic coverage (Row 3) already significantly improves translation performance by 1.1 BLEU points, indicating that coverage information is very important to the attention model. Second, incorporating fertility model boosts the performance by better estimating the covered ratios of source words. NN-based Coverages (Rows 5-7): (1) Gating (Rows 5 and 6): Both variants of NN-based coverages outperform GroundHog with averaged gains of 0.8 and 1.3 BLEU points, respectively. Introducing gating activation function improves the performance of coverage models, which is consistent with the results in other tasks (Chung et al., 2014). (2) Coverage dimensions (Rows 6 and 7): Increasing the dimension of coverage models further improves the translation performance by 0.6 point in BLEU score, at the cost of introducing more parameters (e.g., from 10K to 100K).6 5.3 Alignment Quality Table 2 lists the alignment performances. We find that coverage information improves attention model as expected by maintaining an annotation summarizing attention history on each source word. More specifically, linguistic coverage with fertility significantly reduces alignment errors under both metrics, in which fertility plays an important role. NN-based coverages, however, does not significantly reduce alignment errors until increasing the coverage dimension from 1 to 10. It indicates that NN-based models need slightly more 6In a pilot study, further increasing the coverage dimension only slightly improved the translation performance. One possible reason is that encoding the relatively simple coverage information does not require too many dimensions. 82 (a) Groundhog (b) + NN cov. w/ gating (d = 10) Figure 5: Example alignments. Using coverage mechanism, translated source words are less likely to contribute to generation of the target words next (e.g., top-right corner for the first four Chinese words.). System SAER AER GroundHog 67.00 54.67 + Ling. cov. w/o fertility 66.75 53.55 + Ling. cov. w/ fertility 64.85 52.13 + NN cov. w/o gating (d = 1) 67.10 54.46 + NN cov. w/ gating (d = 1) 66.30 53.51 + NN cov. w/ gating (d = 10) 64.25 50.50 Table 2: Evaluation of alignment quality. The lower the score, the better the alignment quality. dimensions to encode the coverage information. Figure 5 shows an example. The coverage mechanism does meet the expectation: the alignments are more concentrated and most importantly, translated source words are less likely to get involved in generation of the target words next. For example, the first four Chinese words are assigned lower alignment probabilities (i.e., darker color) after the corresponding translation “romania reinforces old buildings” is produced. 5.4 Effects on Long Sentences Following Bahdanau et al. (2015), we group sentences of similar lengths together and compute BLEU score and averaged length of translation for each group, as shown in Figure 6. Cho et al. 
(2014a) show that the performance of Groundhog drops rapidly when the length of input sentence increases. Our results confirm these findings. One main reason is that Groundhog produces much shorter translations on longer sentences (e.g., > 40, see right panel in Figure 6), and thus faces a serious under-translation problem. NMT-COVERAGE alleviates this problem by incorporating coverage information into the attention model, which in general pushes the attention to untranslated parts of the source sentence and implicitly discourages early stop of decoding. It is worthy to emphasize that both NN-based coverages (with gating, d = 10) and linguistic coverages (with fertility) achieve similar performances on long sentences, reconfirming our claim that the two variants improve the attention model in their own ways. As an example, consider this source sentence in the test set: qi´aod¯an bˇen s`aij`ı p´ıngj¯un d´ef¯en 24.3f¯en , t¯a z`ai s¯an zh¯ou qi´an ji¯esh`ou shˇoush`u , qi´udu`ı z`ai cˇı q¯ıji¯an 4 sh`eng 8 f`u . Groundhog translates this sentence into: jordan achieved an average score of eight weeks ahead with a surgical operation three weeks ago . in which the sub-sentence “, qi´udu`ı z`ai cˇı q¯ıji¯an 4 sh`eng 8 f`u” is under-translated. With the (NNbased) coverage mechanism, NMT-COVERAGE translates it into: jordan ’s average score points to UNK this year . he received surgery before three weeks , with a team in the period of 4 to 8 . 83 Figure 6: Performance of the generated translations with respect to the lengths of the input sentences. Coverage models alleviate under-translation by producing longer translations on long sentences. in which the under-translation is rectified. The quantitative and qualitative results show that the coverage models indeed help to alleviate under-translation, especially for long sentences consisting of several sub-sentences. 6 Related Work Our work is inspired by recent works on improving attention-based NMT with techniques that have been successfully applied to SMT. Following the success of Minimum Risk Training (MRT) in SMT (Och, 2003), Shen et al. (2016) proposed MRT for end-to-end NMT to optimize model parameters directly with respect to evaluation metrics. Based on the observation that attentionbased NMT only captures partial aspects of attentional regularities, Cheng et al. (2016) proposed agreement-based learning (Liang et al., 2006) to encourage bidirectional attention models to agree on parameterized alignment matrices. Along the same direction, inspired by the coverage mechanism in SMT, we propose a coverage-based approach to NMT to alleviate the over-translation and under-translation problems. Independent from our work, Cohn et al. (2016) and Feng et al. (2016) made use of the concept of “fertility” for the attention model, which is similar in spirit to our method for building the linguistically inspired coverage with fertility. Cohn et al. (2016) introduced a feature-based fertility that includes the total alignment scores for the surrounding source words. In contrast, we make prediction of fertility before decoding, which works as a normalizer to better estimate the coverage ratio of each source word. Feng et al. (2016) used the previous attentional context to represent implicit fertility and passed it to the attention model, which is in essence similar to the input-feed method proposed in (Luong et al., 2015). 
Comparatively, we predict explicit fertility for each source word based on its encoding annotation, and incorporate it into the linguistic-inspired coverage for attention model. 7 Conclusion We have presented an approach for enhancing NMT, which maintains and utilizes a coverage vector to indicate whether each source word is translated or not. By encouraging NMT to pay less attention to translated words and more attention to untranslated words, our approach alleviates the serious over-translation and under-translation problems that traditional attention-based NMT suffers from. We propose two variants of coverage models: linguistic coverage that leverages more linguistic information and NN-based coverage that resorts to the flexibility of neural network approximation . Experimental results show that both variants achieve significant improvements in terms of translation quality and alignment quality over NMT without coverage. 84 Acknowledgement This work is supported by China National 973 project 2014CB340301. Yang Liu is supported by the National Natural Science Foundation of China (No. 61522204) and the 863 Program (2015AA011808). We thank the anonymous reviewers for their insightful comments. References [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR 2015. [Bengio et al.2003] Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. JMLR. [Brown et al.1993] Peter E. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. [Cheng et al.2016] Yong Cheng, Shiqi Shen, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Agreement-based Joint Training for Bidirectional Attention-based Neural Machine Translation. In IJCAI 2016. [Chiang2007] David Chiang. 2007. Hierarchical phrase-based translation. CL. [Cho et al.2014a] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: encoder–decoder approaches. In SSST 2014. [Cho et al.2014b] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP 2014. [Chung et al.2014] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv. [Cohn et al.2016] Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vylomova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating Structural Alignment Biases into an Attentional Neural Translation Model. In NAACL 2016. [Collins et al.2005] Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In ACL 2005. [Feng et al.2016] Shi Feng, Shujie Liu, Mu Li, and Ming Zhou. 2016. Implicit distortion and fertility models for attention-based encoder-decoder nmt model. arXiv. [Hochreiter and Schmidhuber1997] Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation. [Jean et al.2015] S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In ACL 2015. 
[Koehn et al.2003] Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL 2003. [Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In ACL 2007. [Liang et al.2006] Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In NAACL 2006. [Liu and Sun2015] Yang Liu and Maosong Sun. 2015. Contrastive unsupervised word alignment with nonlocal features. In AAAI 2015. [Luong et al.2015] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP 2015. [Och and Ney2003] Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. [Och2003] Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL 2003. [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL 2002. [Schuster and Paliwal1997] Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. [Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum Risk Training for Neural Machine Translation. In ACL 2016. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In NIPS 2014. [Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In ICML 2015. 85
2016
8
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 843–854, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Analyzing Biases in Human Perception of User Age and Gender from Text Lucie Flekova∗ Ubiquitous Knowledge Processing Lab Department of Computer Science Technische Universit¨at Darmstadt www.ukp.tu-darmstadt.de Jordan Carpenter and Salvatore Giorgi Positive Psychology Center University of Pennsylvania Lyle Ungar and Daniel Preot¸iuc-Pietro Computer & Information Science University of Pennsylvania Abstract User traits disclosed through written text, such as age and gender, can be used to personalize applications such as recommender systems or conversational agents. However, human perception of these traits is not perfectly aligned with reality. In this paper, we conduct a large-scale crowdsourcing experiment on guessing age and gender from tweets. We systematically analyze the quality and possible biases of these predictions. We identify the textual cues which lead to miss-assessments of traits or make annotators more or less confident in their choice. Our study demonstrates that differences between real and perceived traits are noteworthy and elucidates inaccurately used stereotypes in human perception. 1 Introduction There are notable differences between actual user traits and their perception by others (John and Robins, 1994; Kobrynowicz and Branscombe, 1997). Assessments of the perceived traits are dependent, for example, on the interpretation skills of a judge (Kenny and Albright, 1987) and the ability of users to deliberately adjust their behavior to the way they intend to be perceived e.g., for following a social goal (Kanellakos, 2002). People typically use stereotypes – a set of beliefs, generalizations, and associations about a social group – to make judgements about others. The discrepancy between stereotypes and actual group differences is ∗Project carried out during a research stay at the University of Pennsylvania an important topic in psychological research (Eagly, 1995; Dovidio et al., 1996; John and Robins, 1994; Kobrynowicz and Branscombe, 1997). Such differences are likely reflected through one’s writing. With the Internet a substantial part of daily life, users leave enough footprints which allow algorithms to learn a range of individual traits, some with even higher accuracy than the users’ own family (Youyou et al., 2015). With an increase in readily available user generated content, prediction of user attributes has become more popular than ever. Researchers built learning models to infer different user traits from text, such as age (Rao et al., 2010), gender (Burger et al., 2011; Flekova and Gurevych, 2013), location (Eisenstein et al., 2010), political orientation (Volkova et al., 2014), income (Preot¸iucPietro et al., 2015c), socio-economic status (Lampos et al., 2016), popularity (Lampos et al., 2014), personality (Schwartz et al., 2013) or mental illnesses (De Choudhury et al., 2013; Coppersmith et al., 2014; Preot¸iuc-Pietro et al., 2015a). Prediction models are trained on large data sets with labels extracted either from user selfreports (Preot¸iuc-Pietro et al., 2015b) or perceived from annotations (Volkova et al., 2015; Volkova and Bachrach, 2015). The former is useful in obtaining accurate prediction models for unknown users while the latter is more suitable in applications that interact with humans. 
Previous studies showed the implications of perceived individual traits to the believability and likability of autonomous agents (Bates, 1994; Loyall and Bates, 1997; Baylor and Kim, 2004). This study aims to emphasize the differences between real user traits and how these are perceived by humans from Twitter posts. In this context, we address the following research questions: 843 • How accurate are people at judging traits of other users? • Are there systematic biases humans are subject to? • What are the implications of using human perception as a proxy for truth? • Which textual cues lead to a false perception of the truth? • Which textual cues make people more or less confident in their ratings? We use age and gender as target traits for our analysis, as these are considered basic categories in person assessment (Quinn and Macrae, 2005) and are highly studied by previous research. Using a large-scale crowdsourcing experiment, we demonstrate that human annotators are generally accurate in assessing the traits of others. However, they make systematically different types of errors compared to a prediction model trained using the bag-of-words assumption. This hints at the fact that annotators over-emphasize some linguistic features based on their stereotypes. We show how this phenomenon can be leveraged to improve prediction performance and demonstrate that by replacing selfreports with perceived annotations we introduce systematic biases into our models. In our analysis section, we directly test the accuracy of these stereotypes, as the human predictions must rely on these theories of relative differences between groups if no explicit cues are mentioned. We uncover remarkable differences between actual and perceived traits by using multiple lexical features: unigrams, clusters of words built from word embeddings and emotions expressed through posts. In our analysis of features that lead to wrong assessments we uncover that humans mostly rely on accurate stereotypes from textual cues, but sometimes over-emphasize them. For example, annotators assume that males post more than they do about sports and business, females show more joy, older users more interest in politics and younger users use more slang and are more self-referential. Similarly, we highlight the textual features which lead to higher self-reported confidence in guesses, such as the mentions of family and beauty products for gender or college and school related topics for age. 2 Related Work Studying gender differences has been a popular psychological interest over the past decades (Gleser et al., 1959; McMillan et al., 1977). Traditional studies worked on small data sets, which sometimes led to contradictory results – (Mulac et al., 1990) cf. (Pennebaker et al., 2003). Over the past years, researchers discovered a wide range of gender differences using large collections of data from social media or books combined with more sophisticated techniques. For example, Schler et al. (2006) apply machine learning techniques to a corpus of 37,478 blogs from the Blogger platform and find differences in the topics males and females discuss. Newman et al. (2008) showed that female authors are more likely to include pronouns, verbs, references to home, family, friends and to various emotions. Male authors use longer words, more articles, prepositions and numbers. Topical differences include males writing more about current concerns (e.g., money, leisure or sports). 
More recent author profiling experiments (Rangel et al., 2014; Rangel et al., 2015) revealed that gender can be well predicted from a large spectrum of textual features, ranging from paraphrase choice (Preot¸iuc-Pietro et al., 2016), emotions (Volkova and Bachrach, 2016), part-of-speech (Johannsen et al., 2015) and abbreviation usage to social network metadata, web traffic (Culotta et al., 2015), apps installed (Seneviratne et al., 2015) or Facebook likes (Kosinski et al., 2013). Bamman et al. (2014) also examine individuals whose language does not match their automatically predicted gender. Most of these experiments were based on self-reported gender in social media profiles. The relationship between age and language has also been extensively studied by both psychologists and computational linguists. Schler et al. (2006) automatically classified blogposts into three age groups based on self-reported age using features from the Linguistic Inquiry and Word Count Framework (Pennebaker et al., 2001), online slang and part-of-speech information. Rosenthal and McKeown (2011) analyzed how both stylistic and lexical cues relate to gender on blogs. On Twitter, Nguyen et al. (2013) analyzed the relationship between language use and age, modelled as a continuous variable. They found similar language usage trends for both genders, with increasing word and tweet length with age, and an increasing tendency to write more grammatically correct, standardized 844 text. Flekova et al. (2016) identified age specific differences in writing style and analyzed their impact beyond income. Recently, Nguyen et al. (2014) showed that age prediction is more difficult as age increases, specifically over 30 years. Hovy and Søgaard (2015) showed that the author age is a factor influencing training part-of-speech taggers. Recent results on social media data report a performance of over 90% for gender classification and a correlation of r ∼0.85 for age prediction (Sap et al., 2014). However, authors can introduce their biases in text (Recasens et al., 2013). Accurate prediction of the true user traits is important for applications such as recommender systems (Braunhofer et al., 2015) or medical diagnoses (Chattopadhyay et al., 2011). Influencing perceived traits, on the other hand, enables a whole different range of applications - for example, researchers demonstrated that the perceived demographics influence student attitude towards a tutor (Baylor and Kim, 2004; Rosenberg-Kima et al., 2008). Perception alterations do not only strive for likeability - people intentionally use linguistic nuances to express social power (Kanellakos, 2002), which can be recognized by computational means (Bramsen et al., 2011). McConnell and Fazio (1996) show how gender-marked language colors the perception of target personality characteristics – enhanced accessibility of masculine and feminine attributes brought about by frequent exposure to occupation title suffixes influences the inferences drawn about the target person. 3 Data In this study, we focus on analyzing human perception of two user traits: gender and age. For judging, we build data sets using publicly available Twitter posts from users with known self-reported age and gender. To study gender, we use the users from Burger et al. (2011), which are mapped to their self-identified gender as mentioned in other user public profiles linked to their Twitter account. This data set consists of 67,337 users, from which we subsample 2,607 users for human assessment. 
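The tweet preprocessing described just below for the age data set (English filtering with langid.py, removal of duplicates that share their first six tokens, and anonymization of URLs and @-mentions) could look roughly like this sketch; the regular expressions, anonymization tokens, and example tweets are our own illustrative choices rather than the authors' exact pipeline.

```python
import re
import langid  # langid.py, the language identifier cited in this section

def preprocess(tweets):
    """Keep English tweets, drop near-duplicates, and anonymize URLs / @-mentions."""
    seen_prefixes = set()
    cleaned = []
    for text in tweets:
        lang, _score = langid.classify(text)
        if lang != "en":
            continue
        # Duplicate removal: tweets sharing their first 6 tokens are treated as copies.
        prefix = tuple(text.lower().split()[:6])
        if prefix in seen_prefixes:
            continue
        seen_prefixes.add(prefix)
        text = re.sub(r"https?://\S+", "<URL>", text)   # anonymize links
        text = re.sub(r"@\w+", "<USER>", text)          # anonymize @-mentions
        cleaned.append(text)
    return cleaned

print(preprocess([
    "Check out this new song I found http://example.com @friend",
    "Check out this new song I found yesterday, so good",   # same first 6 tokens
    "Esto no es un tweet en ingles",
]))
```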
The age data set consists of 826 users that selfreported their year of birth and Twitter handle as part of an online survey. We use the Twitter API to download up to 3200 tweets from these users. These are filtered for English language using an automatic method (Lui and Baldwin, 2012) and duplicate tweets are eliminated (i.e., having the same first 6 tokens) as these are usually generated automatically by apps. Tweet URLs and @-mentions are anonymized as they may contain sensitive information or cues external to language use. For human assessment, we randomly select 100 tweets posted in the same 6 month time interval from the users where gender is known. For the users of known age we randomly select 100 tweets posted during the year 2015. 4 Experimental Setup We use Amazon Mechanical Turk to create crowdsourcing tasks for predicting age and gender from tweets. Each HIT consists of 20 tweets randomly sampled from the pool of 100 tweets of a single user. Each user was assessed independently by 9 different annotators. Using only these tweets as cues, the annotators were asked to predict either age (integer value) or gender (forced choice binary male/female) and self-rate the confidence of their guess on a scale from 1 (not at all confident) to 5 (very confident). Participants received a small compensation (.02$) for each rating and could repeat the task as many times as they wished, but never for the same author. They were also presented with an initial bonus (.25$) and a similar one upon completing a number of guesses. For quality control, we used a set of HITs where the user’s age or gender was explicitly stated within the top 10 tweets displayed in the task. The control HIT appeared 10% of the time and all annotators missing the correct answer twice were excluded from annotation and all their HITs invalidated. A total of 28 annotators were banned from the study. Further, we limited annotator location to the US and they had to spend at least 10 seconds on each HIT before they were allowed to submit their guess. 5 Crowdsourcing Results We first analyze the annotator performance on the gender and age prediction tasks from text. For gender, individual ratings have an overall accuracy of 75.7% (78.3% for females and 72.8% for males). The pairwise inter-annotator agreement for 9 annotators is 70.0%, Fleiss’ Kappa 39.6% and Krippendorf’s Alpha 39.6%, while keeping in mind that the annotators are not the same for all Twitter users. In terms of confidence, average self-rated confidence for correct guesses is µ = 3.47, while average confidence for wrong guesses is µ = 2.84. In total, 845 1083 individual annotators performed an average of µ = 22.3 ratings with the standard deviation σ = 32.76 and the median of 12. We use the majority vote as the method of label aggregation for gender prediction. The majority vote accuracy on predicting the gender of Twitter users is 85.8% with the majority class baseline being 51.9% female, a result comparable to a previous study (Nguyen et al., 2014). Table 1a presents the gender confusion matrix. Female users were more often classified into a correct class (88.3% recall for females cf. 83.5% for males). The majority of errors was caused by male users mislabeled as female. This results in higher precision on classifying male users (86.9% cf. 85.3% for females). 
In terms of overall self-reported confidence of the annotators, decisions on actual female users were on average more confidently rated (µ = 3.60) compared to males (µ = 3.31), which is in consensus with higher accuracy for females. Figure 2 shows the relationship between annotation accuracy and average confidence per Twitter users. The relationship is non-linear, with the average confidence in the 1–3 range for gender having little impact on the prediction accuracy. For the age annotations, the correlation between predicted and real age for individual ratings is r = 0.416. The mean absolute error (MAE) is 7.31, while the baseline MAE obtained if predicted the sample mean real age is 8.61. The intraclass correlation coefficient between the 9 ratings is 0.367 and taking into account the fact that the annotators were different across users (Shrout and Fleiss, 1979), while the average standard deviation of the 9 user guesses for a single Twitter user is σ = 5.60. Individual rating confidence and the Mean Absolute Error (MAE) are anti-correlated with r = −0.112, matching the expectation that higher self-reported confidence leads to lower errors. The 691 different annotators performed on average µ = 10.68 ratings with standard deviation σ = 21.95 and a median of only 4 ratings. Based on feedback, this was due to the difficulty of the age task. In the rest of the age experiments, we consider the predicted age of a user as a mean of the 9 human guesses. Overall, the correlation between average predicted age and real age is r = 0.631. The MAE of the average predicted age is 6.05. MAE and average self-rated confidence by user are negatively correlated with r = −0.21. Figure 3 plots annotation confidence on a Twitter user level and MAE of 10 20 30 40 50 20 30 40 50 60 70 Real Age Perceived Age Figure 1: Real age predictions compared to average predicted age. The line shows a LOESS fit. age guesses. Again, the relationship between confidence and MAE is non-linear, with confidences of 1–2 having similar average MAE, with the error decreasing as the average of the confidence ratings per author is higher. Figure 1 shows a scatter plot comparing real and predicted age together with a non-linear fit of the data. From this figure, we observe that annotators under-predict age, especially for older users. The correlation of MAE with real age is very high (r = 0.824) and the residuals are not normally distributed. Figures 4 and 5 show the accuracy if only a subsample of the ratings is used and the labels are aggregated using majority vote for gender and using average ratings for age. For gender, we notice that accuracy abruptly increases from 1 to 3 votes and to a lesser extent from 3 to 5 votes, but the differences between 5, 7 and 9 votes are very small. Similarly, for age, MAE decreases up until using 4 guesses, where it reaches a plateau. These experiments suggest that a human perception accuracy can be sufficiently approximated using up to 5 ratings - additional annotations after this point have negligible contribution. Finally, the individual annotator accuracy is independent on the number of users rated. For gender, the Pearson correlation between accuracy and number of ratings performed is r = .009 (p = .75) and for age the Pearson correlation between MAE and the number of ratings performed by a user is r = −.013 (p = .71). This holds even when excluding users who performed few ratings. 
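A minimal sketch of the label aggregation and evaluation used in this section, assuming a per-rating table with hypothetical column names: majority vote over the gender guesses, mean of the age guesses, and accuracy / MAE against the self-reports.

```python
import pandas as pd

# Hypothetical per-rating table: one row per (user, annotator) guess.
ratings = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2, 2],
    "gender_guess": ["F", "F", "M", "M", "M", "F"],
    "age_guess":    [24, 27, 22, 41, 35, 38],
})
truth = pd.DataFrame({"user_id": [1, 2],
                      "gender": ["F", "M"],
                      "age": [25, 44]}).set_index("user_id")

# Gender: majority vote over the guesses for each user.
gender_pred = ratings.groupby("user_id")["gender_guess"].agg(lambda g: g.mode().iloc[0])

# Age: the predicted age of a user is the mean of the individual guesses.
age_pred = ratings.groupby("user_id")["age_guess"].mean()

merged = truth.join(gender_pred).join(age_pred)
accuracy = (merged["gender"] == merged["gender_guess"]).mean()
mae = (merged["age"] - merged["age_guess"]).abs().mean()
print(f"majority-vote gender accuracy: {accuracy:.2f}, age MAE: {mae:.2f}")
```

The same aggregates can be recomputed on a subsample of the guesses per user to reproduce the vote-count curves of Figures 4 and 5.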
6 Uncovering Systematic Biases In this section, we use the extended gender data set in order to investigate if human guesses contain systematic biases by comparing these guesses to those from a bag-of-words prediction model. We then test what is the impact of using human guesses as labels and if human ratings offer additional in846 0.5 0.6 0.7 0.8 0.9 1.0 1 2 3 4 5 Avg. Confidence / User Fraction Correct Figure 2: Gender – Fraction of correct guesses as a function of average confidence per rated Twitter user. Black line shows a LOESS fit. 0 5 10 15 20 1 2 3 4 5 Avg. Confidence / User MAE Figure 3: Age – Mean Absolute Error as a function of average confidence per rated Twitter user. Black line shows a LOESS fit. 70 75 80 85 90 1 3 5 7 9 No.Votes Accuracy Figure 4: Gender – Majority vote accuracy based on number of annotator guesses aggregated. 5.5 6.0 6.5 7.0 7.5 1 3 5 7 9 No.Votes MAE Figure 5: Age – Average Mean Absolute Error based on number of annotator guesses aggregated. formation to predictive models.1 6.1 Comparison to Bag-of-Words Predictions First, we test the hypothesis that annotators emphasize certain stereotypical words to make their guesses. To study their impact, we compare human guesses with those from a statistical model using the bag-of-words assumption for systematic differences. The automatic prediction method using 1Experiments for age could not be replicated due to insufficient labeled users. bag-of-words text features offers a generalisation of individual word usage patterns shielded from biases. We use Support Vector Machines (SVM) with a linear kernel and ℓ1 regularization (Tibshirani, 1996), similarly to the state-of-the-art method in predicting user age and gender (Sap et al., 2014). The features for these models are unigram frequency distributions computed over the aggregate set of messages from each user. Due to the sparse and large vocabulary of social media data, we limit the unigrams to those used by at least 1% of users. We train a classifier on a balanced set of 11,196 Twitter users from our extended data set. We test on the 2,607 users rated by the annotators using only the 100 tweets the humans had access when making their predictions. Table 1b shows the system performance reaching an accuracy of 82.9%, with the human performance on the same data at 85.88%. In contrast to the human prediction, the precision is higher for classifying females (84.9% cf. 80.9% for males) and the recall is higher for males (85.4% cf. 80.4% for female). This is caused by both higher classifier accuracy for males and by a switch in rank between the type I and type II errors. In Table 1c we directly compare the human and automatic predictions, highlighting that 13.6% of the labels are different. Moreover, there is an asymmetry between the tendency of humans to mislabel males with females and the classifier. This leads to the conclusion that humans are sensitive to biases which we will qualitatively investigate in the following sections. 6.2 Human Predictions as Labels Previously, we have shown that perceived annotated traits are different in many aspects to actual traits. To quantify their impact, we use these labels for training two classifiers and compare them on predicting the true gender for unseen users. Both systems are trained on the 260,700 messages from 2,607 users and only differ in the labels assigned to users: majority annotator vote or self-reports. Results on the held-out set of 11,196 users (of which 6,851 males and 7,596 females) are presented in Table 2. 
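A minimal sketch of the bag-of-words classifier described in Section 6.1, a linear SVM with l1 regularization over per-user unigram frequencies where only unigrams used by at least 1% of users are kept; the toy texts, labels, and the normalization step are placeholders and assumptions, not the authors' exact setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import Normalizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# One "document" per user: all of that user's tweets concatenated (placeholder texts).
users_text = [
    "game tonight go team great defense",
    "loved the new lipstick shade so cute",
    "stock market update earnings call tomorrow",
    "baking cupcakes with my daughter today",
]
gender = ["M", "F", "M", "F"]   # self-reported labels (toy values)

model = make_pipeline(
    # Keep unigrams that appear in at least 1% of users (min_df as a fraction);
    # with four toy users this keeps everything, but mirrors the described cut-off.
    CountVectorizer(min_df=0.01),
    Normalizer(norm="l1"),                 # relative unigram frequency per user
    LinearSVC(penalty="l1", dual=False),   # linear SVM with l1 regularization
)
model.fit(users_text, gender)
print(model.predict(["big win for the team last night"]))
```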
The system trained on real labels outperforms that trained on perceived ones (accuracy of 85.32% cf. 83.40%). Furthermore, in the system trained on perceived labels, the same type of error as for the human annotation is more prevalent and is overemphasized compared to our 847 (a) Majority annotator vote. Pred.H Male Female Real Male 40.1% 7.9% Female 6.1% 45.8% (b) Classifier. Pred.C Male Female Real Male 42.2% 7.2% Female 9.9% 40.7% (c) Classifier compared to majority annotator vote. Pred.H Pred.C Male Female Male 40.3% 8.0% Female 5.6% 46.1% Table 1: Normalized confusion matrices of human annotations (Pred.H) to ground truth (Real), classifier performance (Pred.C) to ground truth (Real), and human annotations (Pred.H) to classifier performance (Pred.C) on the same data set. previous results – males are predicted with high precision (85%) but low recall (79%) and many of them are misclassified as women. In the system trained on ground truth, both types of errors are more balanced with more males classified correctly – similar precision (84%) but higher recall (86%). 6.3 Combining Human and Automatic Predictions We have shown that human perceived labels and automatic methods capture different information. This information may be leveraged to obtain better overall predicting performance. We test this by using a linear model that combines two features: the human guesses – measured as the proportion of guesses for female – and classifier prediction – binary value. Even this simple method of label combination obtains a classification accuracy of 87.7%, significantly above majority vote of human guesses (85.8%) and automatic prediction (82.9%) individually. This demonstrates that both methods can complement each other if an increase in accuracy is needed. (a) Trained on perceived gender. Accuracy = 83.4% Pred. Male Female Real Male 37.5% 9.9% Female 6.6% 45.9% (b) Trained on actual gender. Accuracy = 85.3% Pred. Male Female Real Male 40.5% 6.9% Female 7.8% 44.7% Table 2: Normalized confusion matrices for system comparison when using perceived or ground truth labels. 7 Textual Differences between Perceived and Actual Traits We have so far demonstrated that differences exist between the human perception of traits and real traits. Further, human errors differ systematically from a statistical model which generalizes word occurrence patterns. In this section, we directly identify the textual cues that bias humans and cause them to mislabel users. In addition to unigram analysis, in order to aid interpretability of the feature analysis, we group words into clusters of semantically similar words or topics using a method from (Preot¸iuc-Pietro et al., 2015b). We first obtain word representations using the popular skip-gram model with negative sampling introduced by Mikolov et al. (2013) and implemented in the Gensim package (layer size 50, context window 5). We train this model on a separate reference corpus containing ∼400 million tweets. After computing the word vectors, we create a word × word semantic similarity matrix using cosine similarity between the vectors and group the words into clusters using spectral clustering (Shi and Malik, 2000). Each word is only assigned to one cluster. We choose a number of 1,000 topics based on preliminary experiments. Further, we use the NRC Emotion Lexicon (Mohammad and Turney, 2013) to measure eight emotions (anger, fear, anticipation, trust, surprise, sadness, joy and disgust) and two sentiments (negative and positive). 
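A rough sketch of the two feature constructions described above: word clusters built from skip-gram embeddings, a cosine-similarity matrix, and spectral clustering (each word in exactly one cluster), plus lexicon-based emotion scores as a weighted sum over a user's words. The tiny corpus, the 5-cluster setting, and the stand-in lexicon are illustrative; the paper uses a corpus of roughly 400 million tweets, 1,000 clusters, and the full NRC lexicon.

```python
import numpy as np
from gensim.models import Word2Vec               # gensim >= 4 parameter names
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

# Tokenized tweets (tiny placeholder corpus).
tweets = [
    ["love", "my", "new", "lipstick", "and", "mascara"],
    ["great", "game", "by", "the", "team", "tonight"],
    ["makeup", "sale", "on", "lipstick", "and", "mascara"],
    ["the", "team", "lost", "the", "game", "again"],
] * 50  # repeated so the toy model sees enough co-occurrences

# Skip-gram with negative sampling, layer size 50, context window 5 (as in the paper).
w2v = Word2Vec(tweets, vector_size=50, window=5, sg=1, negative=5, min_count=1, seed=0)
words = list(w2v.wv.index_to_key)
vectors = np.array([w2v.wv[w] for w in words])

# Word-by-word cosine similarity, clipped at 0 so it is a valid affinity matrix.
affinity = np.clip(cosine_similarity(vectors), 0.0, None)
labels = SpectralClustering(n_clusters=5, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
for c in range(5):
    print(c, [w for w, l in zip(words, labels) if l == c])

# Emotion score of a user: weighted sum of word occurrences and lexicon weights.
toy_lexicon = {"joy": {"love": 1, "great": 1}, "sadness": {"lost": 1}}  # stand-in for NRC
def emotion_scores(tokens, lexicon):
    return {emo: sum(weights.get(t, 0) for t in tokens)
            for emo, weights in lexicon.items()}

print(emotion_scores(["love", "the", "great", "game", "we", "lost"], toy_lexicon))
```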
A user’s score in each of these 10 dimensions is represented as a weighted sum of its words multiplied by their lexicon score. 7.1 Gender Perception To study gender perception, we first define a measure of perceived gender expression, calculated as the fraction of female guesses out of the 9 guesses for each Twitter user. We then compute univariate correlations the text-derived features and the user 848 Perceived – Female Perceived – Male Topic Perc Real Cont Topic Perc Real Cont such, loving, pretty, beautiful, gorgeous .416 .348 .176 nation, held, rally, defend, supporters -.372 -.281 -.176 bed, couch, blanket, lying, cozy .424 .376 .165 players, teams, crowds, athletes, clubs -.370 -.284 -.171 hair, blonde, shave, eyebrows, dye .379 .325 .152 training, team, field, coach, career -.323 -.246 -.148 friend, boyfriend, bf, bff, gf .365 .308 .149 heat, game, nba, lakers, playoff -.314 -.237 -.145 girl, lucky, she’s, you’re, he’s .378 .336 .143 draft, trade, deadline, stat, retire -.303 -.223 -.143 sweet, angel, honey, pumpkin, bunny .365 .322 .138 ref, offensive, foul, defensive, refs -.324 -.255 -.142 cleaning, laundry, packing, dishes, washing .350 .307 .133 second, third, grade, century, period -.282 -.195 -.142 awake, dream, sleep, asleep, nights .327 .276 .130 former, leader, chief, vice, minister -.316 -.244 -.142 cry, heart, smile, deep, whenever .331 .288 .125 private, claim, jail, removed, banned -.299 -.224 -.138 cake, christmas, gift, cupcakes, gifts .330 .287 .125 war, action, army, battle, zone -.323 -.263 -.135 evening, day, rest, today, sunday .249 .180 .118 security, transition, administration, support -.295 -.225 -.134 light, dark, colors, bright, rainbow .244 .178 .114 general, major, impact, signs, conflict -.295 -.227 -.132 shopping, home, spend, packed, grocery .326 .301 .111 largest, launches, announces, lands, add -.273 -.196 -.132 dreams, live, forget, remember, along .247 .194 .107 guns, planes, riot, weapons, soldiers -.251 -.165 -.131 darling, xo, hugs .259 .211 .106 title, tech, stats, division, technical -.314 -.258 -.129 brother, mom, daddy, daughter, sister .302 .275 .105 breaking, turns, breaks, falls, puts -.266 -.190 -.128 moment, awkward, laugh, excitement, laughter .282 .247 .103 million, billion -.277 -.206 -.128 totally, awesome, favorite, love, fave .272 .233 .103 steve, joe, dave, larry, phil -.294 -.236 -.124 breakfast, dinner, lunch, cooking, meal .280 .245 .103 football, pitch, blues, derby, lineup -.276 -.211 -.124 makeup, glasses, lipstick .264 .223 .102 ceo, warren -.240 -.160 -.123 Unigrams Perc Real Cont Unigrams Perc Real Cont love,my,so,!,you,I,her,hair,feel,today, .339 .259 .156 game,the,sports,against,football,teams, -.270 -.236 -.130 friends,baby,cute,girls,beautiful,me,heart, −→ player,fans,report,team,ebola,vs,nba,games, −→ little,shopping,happy,because,wonderful, economy,score,government,ceo,americans, gorgeous,bed,clothes,am,have,yay,your .179 .081 .071 goals,app,penalties,play,shit,political,war -.117 -.062 -.065 Emotion Perc Real Cont Emotion Perc Real Cont Joy .255 .245 .091 Anger -.156 -.117 -.076 Fear -.183 -.145 -.084 Table 3: Textual features highlighting errors in human perception of gender compared to ground truth labels. Table shows correlation to perceived gender expression (Perc), to ground truth (Real) and to perceived gender expression controlled for ground truth (Cont). 
All correlations of gender unigrams, topics and emotions are statistically significant at p < .001 (t-test) Gender – High Confidence Gender – Low Confidence Topic Conf Real Cont Topic Conf Real Cont sibling,flirted,married,husband,wife (.028) (.071) .240 wiser,easier,shittier,happier,worse -.277 (.081) -.295 fellaz,boyss,dayz,girlz,gurlz,sistas (.118) (.113) .221 agenda,planning,activities,schedule -.285 (.020) -.289 brother, mom, daddy, daughter, sister (.127) .241 .214 horoscope,zodiac,gemini,taurus,virgo -.269 (.087) -.288 bathroom,wardrobe,toilet,clothes,bath (.017) .220 .212 reshape,enable,innovate,enhance,create -.253 (-.110) -.235 looked,winked,smiled,lol’d,yell,stare (.035) (.089) .201 imperfect,emotional,break-down,commit -.227 .024 -.232 hair, blonde, shave, eyebrows, dye .163 .182 .199 major,brief,outlined,indicates,wrt -.234 (-.045) -.226 pyjama,shirt,coat,hoody,trousers (.077) (-.010) .191 justification,circumstance,boundaries -.224 (-.014) -.221 awake, dream, sleep, asleep, nights .160 (.132) .184 experiencing, explanations, expressive -.225 (-.039) -.217 totally, awesome, favorite, love, fave (.063) (.135) . 183 inferiority,sufficiently,adequately -.209 (-.015) -.206 days,minutes,seconds,years,months (.087) (-.013) .177 specified,negotiable,exploratory,expert -.190 (-.014) -.187 baldy,gangster,boy,kid,skater,dude (.071) (.027) .173 multiple,desirable,extensive,increasingly -.199 (-.092) -.183 shopping,grocery,ikea,manicure (.052) .204 .173 anticipate,optimist,unrealistic,exceed (.053) (.023) -.182 happy,birthdayyyy,happyyyy,bday .180 .222 .172 organisation,communication,corporate -.200 -.148 -.175 girl, lucky, she’s, you’re, he’s (.118) (.060) .172 hostile,choppy,chaotic,cautious,neutral -.178 (-.033) -.172 worst,happiest,maddest,slowest,funniest .173 (.113) .172 security, transition, administration, supports .185 (-.079) -.170 bazillion,shitload,nonstop,spent,aand .162 (.084) .167 diminished,unemployment,rapidly -.181 (-.101) -.163 Emotion Conf Real Cont Emotion Conf Real Cont Joy .202 .245 .164 – Anticipation .140 (.086) .124 Unigrams Conf Real Cont Unigrams Conf Real Cont I,my,this,was,me,so,had,like, .312 .267 .360 more,may,might,although, .290 .081 .310 her,night,she,just,hair,gonna, −→ emotional,your,eager,url, −→ ever,last,shirt, desires,relationship,seem,existing, kid,girls,love (.076) (.047) .160 emotions,surface,practical,source .150 -.014 .180 Table 4: Textual features highlighting high and low confidence in human perception of gender. Table shows correlation to average self-reported confidence (Conf), to ground truth (Real) and with self-reported confidence controlled for ground truth (Cont). All correlations of gender unigrams, topics and emotions are statistically significant at p < .001 (t-test), except of the values in brackets. 
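The "Cont" columns in these tables report correlations with the perceived trait controlled for the ground-truth label, i.e. partial correlations. A minimal sketch of that computation, correlating the residuals of both variables after regressing out the control, on synthetic data that stands in for the real features:

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Pearson correlation between x and y after removing the linear effect of z."""
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    design = np.column_stack([np.ones_like(z), z])
    # Residuals of x and y after ordinary least squares on the control variable z.
    res_x = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    res_y = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return stats.pearsonr(res_x, res_y)

rng = np.random.default_rng(0)
real_gender = rng.integers(0, 2, size=200)                # 1 = female (toy coding)
topic_use = 0.5 * real_gender + rng.normal(size=200)      # a stereotypical topic feature
perceived = 0.3 * real_gender + 0.4 * topic_use + rng.normal(size=200)

r, p = partial_corr(topic_use, perceived, real_gender)
print(f"correlation with perception, controlling for real gender: r={r:.2f}, p={p:.3f}")
```

A feature whose partial correlation stays large after controlling for the true label is one that annotators weight more heavily than the data warrants, which is exactly how the "Cont" columns are read in the surrounding analysis.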
849 Perceived – Older Perceived – Younger Topic Perc Real Cont Topic Perc Real Cont golf, sport, semi, racing .278 (.085) .226 she’s, youre, hes, lucky, girl, slut -.328 -.243 -.184 bill, union, gov, labor, cuts .349 .287 .181 boys, girls, hella, homies, ya’ll -.297 -.236 -.155 states, public, towns, area, employees, immigrants .301 .213 .173 dumb, petty, weak, lame, bc, corny -.295 -.232 -.155 roger, stanley, captain .232 (.105) .167 miss, doing, chilling, how’s -.305 -.268 -.145 available, service, apply, package, customer .279 .197 .160 heart, cry, smile, deep, hug -.258 -.186 -.144 serving, prime, serve, served, freeze .215 (.097) .154 friend, bestfriend, boyfriend, bff, bestest -.281 -.254 -.127 support, leaders, group, youth, educate .228 .121 .153 ugly, stubborn, bein, rude, childish, greedy -.238 -.182 -.126 hillary, clinton, obama, president, scott, ed, sarah .289 .230 .150 bitch, fuck, hoe, dick, slap, suck -.278 -.251 -.125 via, daily, press, latest, report, globe .311 .272 .149 kinda, annoying, weird, silly, emo, retarded, random -.242 -.193 -.124 diverse, developed, multiple, among, several, highly .266 .195 .147 everyone, everything, nothing, does, anyone, else -.201 -.218 -.118 military, terrorist, citizens, iraq, refugees .287 .235 .146 bruh, aye, fam, doin, yoo, dawg -.227 -.178 -.117 julia, emma, annie, claire .180 (.056) .145 ever, cutest, worst, weirdest, biggest, happiest -.275 -.264 -.115 liberty, pacific, north, eastern, 2020 .260 .198 .139 seriously, crazy, bad, shitty, yikes, insane -.208 -.152 -.114 brooklyn, nyc, downtown, philly, hometown .213 .120 .139 whoops, oops, remembered, forgot -.179 (-.104) -.113 Unigrams Perc Real Cont Unigrams Perc Real Cont golf, our, end, delay, favourite, low, holes, original, .321 .(063) .282 me, i, when, like, you, so, dude, don’t, hate, im, u, -.535 -.489 -.294 branch, the, of, stanley, our, . , story, , , −→ girl, hate, life, my, wanna, literally, −→ forever, exciting, great, what, community, hurricane, r, really, cute, someone, youre, miss, me , want, this for, brands, toward, kids, regarding, upcoming .208 (.101) .145 okay, rt, school, snapchat, shit, crying -.256 (-.051) -.117 Emotion Perc Real Cont Emotion Perc Real Cont Positive .325 .268 .166 Disgust -.177 -.131 -.094 Trust .243 .184 .130 Negative -.104 (-.031) -.084 Anticipation .212 .176 .102 Sadness -.126 -.072 -.081 Anger -.070 (-.009) -.065 Table 5: Textual features highlighting errors in human perception of age compared to ground truth labels. Table shows correlation to perceived age expression (Perc), to ground truth (Real) and to perceived age expression controlled for ground truth (Cont). All correlations of age unigrams, topics and emotions are statistically significant at p < .001 (t-test), except of the values in brackets. 
Age – High Confidence Age – Low Confidence Topic Conf Real Cont Topic Conf Real Cont school, student, college, teachers, grad, classroom .242 (-.054) .227 mocho, gracias, chicos, corazon, quiero -.195 (-.042) -.207 done, homework, finished, essay, procrastinating .251 -.125 .219 sweepstakes, giveaway, enter, retweet, prize (-.044) -.278 -.134 math, chem, biology, test, study, physics .227 (-.060) .210 injures, shot, penalty, strikes, cyclist, suffered -.149 .153 -.108 cant, can’t, wait, till, believe, afford .226 -.171 .183 final, cup, europa, arsenal, match, league -.135 .107 -.106 tomorrow, friday, saturday, date, starts .175 (-.014) .171 juventus, munich, lyon, bayern, 0-1 (-.101) (-.005) -.103 invitations, prom, attire, wedding, outfit, gowns .172 (.005) .170 castlevania, angels, eagles, demons, flames -.138 .138 (-.101) soexcited, next, week, weekend, summer, graduation .153 (.009) .155 devil, sword, curse, armor, die, obey (-.081) (-.055) (-.097) aaand, after, before, literally, off, left, gettingold .182 (-.103) .154 football, reds, kickoff, derby, pitch, lineup -.125 .106 (-.096) sleepy, work, shifts, longday, exhausted, nap .126 (.064) .144 anime, invader, shock, madoka, dragonball (-.071) (-.080) (-.095) life, daydream, remember, cherish, eternally, reminiscing .200 -.228 .143 paranormal, dragon, alien, zombie, dead (-.099) (.025) (-.092) happyyyy, birthdaaaay, b-day, bday, belated .187 -.173 .142 earthquake, magniture, aftermath, devastating, victims (-.101) (.040) (-.090) Unigrams Conf Real Cont Unigrams Conf Real Cont my, i’m, can’t, i, school, so, to, class, .375 -.350 .314 rt, his, league, epic (-.023) -.320 -.128 semester, college, homework, prom, me, in my, → warriors, ! , → friends, literally, when, exam, nap .180 (.080) .157 vintage -.130 (.071) -.111 Emotion Conf Real Cont Emotion Conf Real Cont Trust (.077) .184 .134 – Joy .125 (.009) .128 Positive (.031) .268 .115 Anticipation (.060) .176 .114 Table 6: Textual features highlighting high and low confidence in human perception of age. Table shows correlation to average self-reported confidence (Conf), to ground truth (Real) and with self-reported confidence controlled for ground truth (Cont). Correlation values of age unigrams, topics and emotions statistically significant at p < .001 (t-test) unless in brackets. labels. Table 3 displays the features with significant correlation to perceived gender expression when controlled for real gender using partial correlation, as well as the standalone correlations with the real gender label and perceived gender expression. Note that all correlations with both males and females have the same sign for both perceived gender and real gender. This highlights that humans are not wrong in using these features to make gender assessments. Rather, these stereotypical associates are overestimated by humans. By analyzing the topics that are still correlated with perception after controlling for ground truth correlation, we see that topics related to sports, politics, business and technology are considered by annotators to be stronger cues for predicting males than they really are. Female perception is dominated by topics and words relating to feelings, 850 shopping, dreaming, housework and beauty. For emotions, joy is perceived to be more associated to females than the data shows, while users expressing more anger and fear are significantly more likely to be perceived as males than the data supports. 
Our crowdsourcing experiment allowed annotators to self-report their confidence in each choice. This gives us the opportunity to measure which textual features lead to higher self-reported confidence in predicting user traits. Table 4 shows the textual features most correlated with self-reported confidence of the annotators when controlled for ground truth, in order to account for the effect that overall confidence is on average higher for groups of users that are easier to predict (i.e., females in case of gender, younger people in case of age). Annotations are most confident when family relationships or other people are mentioned, which aid them to easily assign a label to a user (e.g., ‘husband’). Other topics leading to high confidence are related to apparel or beauty. Also the presence of joy leads to higher confidence (for predicting females based on the previous result). Low confidence is associated with work related topics or astrology as well as to clusters of general adverbs and verbs and tentatively, to a more formal vocabulary e.g., ‘specified’, ‘negotiable’, ‘exploratory’. Intriguingly, low confidence in predicting gender is also related to unigrams like ‘emotions’, ‘relationship’, ‘emotional’. 7.2 Age Perception Table 5 displays the features most correlated with perceived age – the average of the 9 annotator guesses – when controlled for real age, and the individual correlations to perceived and real age. Again, annotators relied on correct stereotypes, but relied on them more heavily than warranted by data. The results show that the perception of users as being older compared to their biological age, is driven by topics including politics, business and news events. Vocabulary contains somewhat longer words (e.g., ‘regarding’, ‘upcoming’, ‘original’). Additionally, annotators perceived older users to express more positive emotions, trust and anticipation. This is in accordance with psychology research, which showed that both positive emotion (Mather and Carstensen, 2005) and trust (Poulin and Haase, 2015) increase as people get older. The perception of users being younger than their biological age is highly correlated with the use of short and colloquial words, and self-references, such as the personal pronoun ‘I’. Remarkably, the negative sentiment is perceived as more specific of younger users, as well as the negative emotions of disgust, sadness and anger, the later of which is actually uncorrelated to age. Table 6 displays the features with the highest correlation to annotation confidence in predicting age when controlling for the true age, as well as separate correlations to real and perceived age. Annotators appear to be more confident in their guess when the posts display more joy, positive emotion, trust and anticipation words. In terms of topics mentioned, these are more informal, self-referential or related to school or college. Topics leading to lower confidence are either about sports or online contests or are frequently retweets. 8 Conclusions This is the first study to systematically analyze differences between real user traits and traits as perceived from text, here Twitter posts. Overall, participants were generally accurate in guessing a person’s traits supporting earlier research that stereotypical associations are frequently accurate (McCauley, 1995). However, we have demonstrated that humans use stereotypes which lead to systematic biases by comparing their guesses to predictions from statistical models using the bag-ofwords assumption. 
While qualitatively different, these predictions were shown to offer complimentary information in case of gender, boosting overall accuracy when used jointly. Our experimental design allowed us to directly test which textual cues lead to inaccurate assessments. Correlation analysis showed that aspects of stereotypes associated with errors tended not to be completely wrong but rather poorly applied. Annotators generally exaggerated the diagnostic utility of behaviors that they correctly associated with one group or another. Further, we used the same methodology to analyze self-reported confidence. Follow-up studies can analyze the perception of other user traits such as education level, race or political orientation. Another avenue of future research can look at the annotators’ own traits and how these relate to perception (Flekova et al., 2015). This would allow to uncover demographic or psychological traits that influence the ability to make more accurate judgements. This is particularly useful in offering task requesters a prior over which annotators are expected to perform tasks better. 851 Acknowledgments The authors acknowledge the support from Templeton Religion Trust, grant TRT-0048. The work of the first author was further supported by the German Research Foundation under grant No. GU 798/14-1 and the German Federal Ministry of Education and Research (BMBF) under the promotional reference 01-S12054. We wish to thank Prof. Iryna Gurevych for supporting the collaboration, the Mechanical Turk annotators for their diligence and the reviewers for their thoughtful comments. References David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014. Gender Identity and Lexical Variation in Social Media. Journal of Sociolinguistics, 18(2):135–160. Joseph Bates. 1994. The Role of Emotion in Believable Agents. Communications of the ACM, 37(7):122–125. Amy L Baylor and Yanghee Kim. 2004. Pedagogical Agent Design: The Impact of Agent Realism, Gender, Ethnicity, and Instructional Role. In Intelligent Tutoring Systems, volume 3220, pages 592–603. Philip Bramsen, Martha Escobar-Molano, Ami Patel, and Rafael Alonso. 2011. Extracting Social Power Relationships from Natural Language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, ACL, pages 773– 782. Matthias Braunhofer, Mehdi Elahi, and Francesco Ricci. 2015. User personality and the new user problem in a context-aware point of interest recommender system. In Information and Communication Technologies in Tourism, pages 537–549. D. John Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating Gender on Twitter. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1301–1309. Subhagata Chattopadhyay, Preetisha Kaur, Fethi Rabhi, and Rajendra Acharya. 2011. An Automated System to Diagnose the Severity of Adult Depression. In Second International Conference on Emerging Applications of Information Technology, EAIT, pages 121–124. Glen Coppersmith, Mark Dredze, and Craig Harman. 2014. Quantifying Mental Health Signals in Twitter. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, ACL, pages 51–60. Aron Culotta, Nirmal Kumar Ravi, and Jennifer Cutler. 2015. Predicting the Demographics of Twitter Users from Website Traffic Data. In Proceedings of the 9th International AAAI Conference on Weblogs and Social Media, ICWSM, pages 72–78. 
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 855–865, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Modeling Social Norms Evolution for Personalized Sentiment Classification Lin Gong1, Mohammad Al Boni2, Hongning Wang1 1Department of Computer Science, 2Department of System and Information Engineering University of Virginia, Charlottesville VA, 22904 USA {lg5bt, ma2sm, hw5x}@virginia.edu Abstract Motivated by the findings in social science that people’s opinions are diverse and variable while together they are shaped by evolving social norms, we perform personalized sentiment classification via shared model adaptation over time. In our proposed solution, a global sentiment model is constantly updated to capture the homogeneity in which users express opinions, while personalized models are simultaneously adapted from the global model to recognize the heterogeneity of opinions from individuals. Global model sharing alleviates data sparsity issue, and individualized model adaptation enables efficient online model learning. Extensive experimentations are performed on two large review collections from Amazon and Yelp, and encouraging performance gain is achieved against several state-of-the-art transfer learning and multi-task learning based sentiment classification solutions. 1 Introduction Sentiment is personal; the same sentiment can be expressed in various ways and the same expression might carry distinct polarities across different individuals (Wiebe et al., 2005). Current mainstream solutions of sentiment analysis overlook this fact by focusing on population-level models (Liu, 2012; Pang and Lee, 2008). But the idiosyncratic and variable ways in which individuals communicate their opinions make a global sentiment classifier incompetent and consequently lead to suboptimal opinion mining results. For instance, a shared statistical classifier can hardly recognize that in restaurant reviews, the word “expensive” may indicate some users’ satisfaction with a restaurant’s quality, although it is generally associated with negative attitudes. Hence, a personalized sentiment classification solution is required to achieve fine-grained understanding of individuals’ distinctive and dynamic opinions and benefit downstream opinion mining applications. Sparse observations of individuals’ opinionated data (Max, 2014) prevent straightforward solutions from building personalized sentiment classification models, such as estimating supervised classifiers on a per-user basis. Semi-supervised methods are developed to address the data sparsity issue. For example, leveraging auxiliary information from user-user and user-document relations in transductive learning (Hu et al., 2013; Tan et al., 2011). However, only one global model is estimated there, and the details of how individual users express diverse opinions cannot be captured. More importantly, existing solutions build static sentiment models on historic data; but the means in which a user expresses his/her opinion is changing over time. To capture temporal dynamics in a user’s opinions with existing solutions, repeated model reconstruction is unavoidable, albeit it is prohibitively expensive. As a result, personalized sentiment analysis requires effective exploitation of users’ own opinionated data and efficient execution of model updates across all users. 
To address these challenges, we propose to build personalized sentiment classification models via shared model adaptation. Our solution roots in the social psychology theories about humans’ dispositional tendencies (Briley et al., 2000). Humans’ behaviors are shaped by social norms, a set of socially shared “feelings” and “display rules” about how one should feel and express opinions (Bars¨ade and Gibson, 1998; Sherif, 1936). In the context of content-based sentiment classification, we interpret social norms as global model sharing and adaptation across users. Formally, we assume a global sentiment model serves as the basis to capture self-enforcing sentimental regulari855 ties across users, and each individual user tailors the shared model to realize his/her personal preference. In addition, social norms also evolve over time (Ehrlich and Levin, 2005), which leads to shifts in individuals’ behaviors. This can again be interpreted as model adaptation: a new global model is adapted from an existing one to reflect the newly adopted sentimental norms. The temporal changes in individuals’ opinions can be efficiently captured via online model adaptation at the levels of both global and personalized models. Our proposed solution can also be understood from the perspective of multi-task learning (Evgeniou and Pontil, 2004; Jacob et al., 2009). Intuitively, personalized model adaptations can be considered as a set of related tasks in individual users, which contribute to a shared global model adaptation. In particular, we assume the distinct ways in which users express their opinions can be characterized by a linear classifier’s parameters, i.e., the weights of textual features. Personalized models are thus achieved via a series of linear transformations over a globally shared classifier’s parameters (Wang et al., 2013), e.g., shifting and scaling the weight vector. This globally shared classifier itself is obtained via another set of linear transformations over a given base classifier, which can be estimated from an isolated collection beforehand and serves as a prior for shared sentiment classification. The shared global model adaptation makes personalized model estimation no longer independent, such that regularity is formed across individualized learning tasks. We empirically evaluated the proposed solution on two large collections of reviews, i.e., Amazon and Yelp reviews. Extensive experiment results confirm its effectiveness: the proposed method outperformed user-independent classification methods, several state-of-the-art model adaption methods, and multi-task learning algorithms. 2 Related Work Text-based sentiment classification forms the foundation of sentiment analysis (Liu, 2012; Pang and Lee, 2008). There are two typical types of studies in sentiment classification. The first is classifying input text units (such as documents, sentences and phrases) into predefined categories, e.g., positive v.s., negative (Pang et al., 2002; Gao et al., 2014) and multiple classes (Pang and Lee, 2005). Both lexicon-based and learningbased solutions have been explored. The second is identifying topical aspects and corresponding opinions, e.g., developing topic models to predict fine-grained aspect ratings (Titov and McDonald, 2008; Wang et al., 2011). However, all those works emphasize population-level analysis, which applies a global model on all users and therefore fails to recognize the heterogeneity in which different users express their diverse opinions. 
Our proposed solution is closely related to multi-task learning, which exploits the relatedness among multiple learning tasks to benefit each single task. Tasks can be related in various ways. A typical assumption is that all learnt models are close to each other in some matrix norms (Evgeniou and Pontil, 2004; Jacob et al., 2009). This has been empirically proved to be effective for capturing preferences of individual users (Evgeniou et al., 2007). Task relatedness has also been imposed via constructing a common underlying representation across different tasks (Argyriou et al., 2008; Evgeniou and Pontil, 2007). Our solution postulates task relatedness via a two-level model adaptation procedure. The global model adaptation accounts for the homogeneity and shared dynamics in users’ opinions; and personalized model adaptation realizes heterogeneity in individual users. The idea of model adaptation has been extensively explored in the context of transfer learning (Pan and Yang, 2010), which focuses on applying knowledge gained while solving one problem to different but related problems. In opinion mining community, transfer learning is mostly exploited for domain adaptation, e.g., adapting sentiment classifiers trained on book reviews to DVD reviews (Blitzer et al., 2006; Pan et al., 2010). Personalized model adaptation has also been studied in literature. The idea of linear transformation based model adaptation is introduced in (Wang et al., 2013) for personalized web search. Al Boni et al. applied a similar idea to achieve personalized sentiment classification (Al Boni et al., 2015). (Li et al., 2010) developed an online learning algorithm to continue training personalized classifiers based on a given global model. However, all of these aforementioned solutions perform model adaptation from a fixed global model, such that the learning of personalized models is independent from each other. Data sparsity again is the major bottleneck for such solutions. Our solution associates individual model adaptation via a shared global model adaptation, which leverages observations across users and thus reduces preference learning complexity. 856 3 Methodology We propose to build personalized sentiment classifiers via shared model adaptation for both a global sentiment model and individualized models. Our solution roots in the social psychology theories about humans’ dispositional tendencies, e.g., social norms and the evolution of social norms over time. In the following discussions, we will first briefly discuss the social theories that motivate our research, and then carefully describe the model assumptions and technical details about the proposed personalized model adaptation solution. 3.1 The Evolution of Social Norms Social norms create pressures to establish socialization of affective experience and expression (Shott, 1979). Within the limit set by social norms and internal stimuli, individuals construct their sentiment, which is not automatic, physiological consequences but complex consequences of learning, interpretation, and social influence. This motivates us to build a global sentiment classification model to capture the shared basis on which users express their opinions. For example, the phrase “a waste of money” generally represents negative opinions across all users; and it is very unlikely that anybody would use it in a positive sense. 
On the other hand, members of some segments of a social structure tend to feel certain emotions more often or more intensely than members of other segments (Hochschild, 1975). Personalized model adaptation from the shared global model becomes necessary to capture the variability in affective expressions across users. For example, the word “expensive” may indicate some users’ satisfaction with their received service. Studies in social psychology also suggest that social norms shift and spread through infectious transfer mediated by webs of contact and influence over time (Ostrom, 2014; Ehrlich and Levin, 2005). Members inside a social structure influence the other members; confirmation of shifted beliefs leads to the development and evolution of social norms, which in turn regulate the shared social behaviors as a whole over time. The evolving nature of social norms urges us to take a dynamic view of the shared global sentiment model: instead of treating it as fixed, we further assume this model is also adapted from a predefined one, which serves as prior for sentiment classification. All individual users are coupled and contribute to this shared global model adaptation. This twolevel model adaptation assumption leads us to the proposed multi-task learning solution, which will be carefully discussed in the next section. 3.2 Shared Linear Model Adaptation In this paper, we focus on linear models for personalized sentiment classification due to their empirically superior performance in text-based sentiment analysis (Pang et al., 2002; Pang and Lee, 2005). We assume the diverse ways in which users express their opinions can be characterized by different settings of a linear model’s parameters, i.e., the weights of textual features. Formally, we denote a given set of opinionated text documents from user u as Du = {(xu d, yu d)}|Du| d=1 , where each document xu d is represented by a V -dimensional vector of textual features and yu d is the corresponding sentiment label. The task of personalized sentiment classification is to estimate a personalized model y = fu(x) for user u, such that fu(x) best captures u’s opinions in his/her generated text content. Instead of assuming fu(x) is solely estimated from user u’s own opinionated data, which is prone to overfitting, we assume it is derived from a globally shared sentiment model fs(x) via model adaptation (Al Boni et al., 2015; Wang et al., 2013), i.e., shifting and scaling fs(x)’s parameters for each individual user. To simplify the following discussions, we will focus on binary classification, i.e., yd ∈{0, 1}, and use the logistic regression as our reference model. But the developed techniques are general and can be easily extended to multi-class classification and generalized linear models. We only consider scaling and shifting operations, given rotation requires to estimate much more free parameters (i.e., O(V 2) v.s., O(V )) but contributes less in final classification performance (Al Boni et al., 2015). We further assume the adaptations can be performed in a group-wise manner (Wang et al., 2013): features in the same group will be updated synchronously by enforcing the same shifting and scaling operations. This enables the observations from seen features to be propagated to unseen features in the same group during adaptation. Various feature grouping methods have been explored in (Wang et al., 2013). Specifically, we define g(i) →j as a feature grouping method, which maps feature i in {1, 2, . . . , V } to feature group j in {1, 2, . . . 
, K}. A personalized model adaptation matrix can then be represented as a 2K-dimensional vector Au = (au 1, au 2, . . . , au K, bu 1, bu 2, . . . , bu K), where au k and bu k 857 represent the scaling and shifting operations in feature group k for user u accordingly. Plugging this group-wise model adaptation into the logistic function, we can get a personalized logistic regression model P u(yd = 1|xd) for user u as follows, P u(yd = 1|xd) = 1 1 + e−PK k=1 P g(i)=k (au k ws i +bu k )xi (1) where ws is the feature weight vector in the global model fs(x). As a result, personalized model adaptation boils down to identifying the optimal model transformation operation Au for each user based on ws and Du. In (Al Boni et al., 2015; Wang et al., 2013), fs(x) is assumed to be given and fixed. It leads to isolated estimation of personalized models. Based on the social norms evolution theory, fs(x) should also be dynamic and ever-changing to reflect shifted social norms. Hence, we impose another layer of model adaptation on top of the shared global sentiment model fs(x), by assuming itself is also adapted from a predefined base sentiment model. Denote this base classifier as f0(x), which is parameterized by a feature weight vector w0 and serves as a prior for sentiment classification. Then ws can be derived via the same aforementioned model adaptation procedure: ws = As ˜w0, where ˜w0 is an augmented vector of w0, i.e., ˜w0 = (w0, 1), to facilitate shifting operations, and As is the adaptation matrix for the shared global model. We should note As can take a different configuration (i.e., feature groupings) from individual users’ adaptation matrices. Putting these two levels of model adaptation together, a personalized sentiment classifier is achieved via, wu = AuAs ˜w0 (2) which can then be plugged into Eq (1) for personalized sentiment classification. We name this resulting algorithm as MutliTask Linear Model Adaptation, or MT-LinAdapt in short. The benefits of shared model adaptation defined in Eq (2) are three folds. First, the homogeneity in which users express their diverse opinions are captured in the jointly estimated sentiment model fs(x) across users. Second, the learnt individual models are coupled together to reduce preference learning complexity, i.e., they collaboratively serve to reduce the models’ overall prediction error. Third, non-linearity is achieved via the two-level model adaptation, which introduces more flexibility in capturing heterogeneity in different users’ opinions. In-depth discussions of those unique benefits will be provided when we introduce the detailed model estimation methods. 3.3 Joint Model Estimation The ideal personalized model adaptation should be able to adjust the individualized classifier fu(x) to minimize misclassification rate on each user’s historical data in Du. In the meanwhile, the shared sentiment model fs(x) should serve as the basis for each individual user to reduce the prediction error, i.e., capture the homogeneity. These two related objectives can be unified under a joint optimization problem. In logistic regression, the optimal adaptation matrix Au for an individual user u, together with As can be retrieved by a maximum likelihood estimator (i.e., minimizing logistic loss on a user’s own opinionated data). 
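Before writing out the estimation objective, the following is a minimal NumPy sketch of the two-level prediction defined by Eq (1) and Eq (2): a base weight vector is scaled and shifted group-wise into the shared global model, and the result is scaled and shifted again into the user's personalized model. The function names, toy dimensions, and feature groupings below are illustrative choices, not part of the paper.

```python
import numpy as np

def adapt_weights(w, a, b, group):
    """Group-wise scaling and shifting of a weight vector.

    w     : (V,) source feature weights
    a, b  : (K,) scaling and shifting parameters, one pair per feature group
    group : (V,) int array mapping feature i to its group g(i)
    """
    return a[group] * w + b[group]

def personalized_prob(x, w0, a_s, b_s, g_s, a_u, b_u, g_u):
    """P^u(y=1|x) following Eq (1)-(2): the base weights w0 are adapted into
    the shared global model w_s, which is then adapted into the user model w_u."""
    w_s = adapt_weights(w0, a_s, b_s, g_s)   # global adaptation A^s applied to w0
    w_u = adapt_weights(w_s, a_u, b_u, g_u)  # personalized adaptation A^u applied to w_s
    return 1.0 / (1.0 + np.exp(-(x @ w_u)))

# Toy example: V=6 features, 2 groups at each level (dimensions are made up).
rng = np.random.default_rng(0)
V = 6
w0 = rng.normal(size=V)                 # base classifier weights
g_s = np.array([0, 0, 0, 1, 1, 1])      # feature grouping for the global model
g_u = np.array([0, 1, 0, 1, 0, 1])      # a different grouping for the user
a_s, b_s = np.ones(2), np.zeros(2)      # start from an identity adaptation
a_u, b_u = np.array([1.2, 0.8]), np.array([0.1, -0.1])
x = rng.random(V)                       # e.g., TF-IDF features of one review
print(personalized_prob(x, w0, a_s, b_s, g_s, a_u, b_u, g_u))
```

Because the two groupings g_u and g_s can differ, a single scaling or shifting parameter at one level touches features that fall into different groups at the other level, which is where the non-linearity noted above comes from.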
The log-likelihood function in each individual user is defined as, L(Au, As) = |Du| X d=1 h yd log P u(yd = 1|xd) (3) + (1 −yd) log P u(yd = 0|xd) i To avoid overfitting, we penalize the transformations which increase the discrepancy between the adapted model and its source model (i.e., between wu and ws, and between ws and w0) via a L2 regularization term, R(A) = η1 2 ||a −1||2 + η2 2 ||b||2 (4) and it enforces scaling to be close to one and shifting to be close to zero. By defining a new model adaptation matrix ˚ A = {Au1, Au2, . . . , AuN , As} to include all unknown model adaptation parameters for individual users and shared global model, we can formalize the joint optimization problem in MT-LinAdapt as, max L(˚ A)= N X i=1 h L(Aui)−R(Aui) i −R(As) (5) which can be efficiently solved by a gradientbased optimizer, such as quasi-Newton method (Zhu et al., 1997). Direct optimization over ˚ A requires synchronization among all the users. But in practice, users will generate their opinionated data with different paces, such that we have to postpone model adaptation until all the users have at least one observation to update their own adaptation matrix. 858 This delayed model update is at high risk of missing track of active users’ recent opinion changes, but timely prediction of users’ sentiment is always preferred. To monitor users’ sentiment in realtime, we can also estimate MT-LinAdapt in an asynchronized manner: whenever there is a new observation available, we update the corresponding user’s personalized model together with the shared global model immediately. i.e., online optimization of MT-LinAdapt. This asychronized estimation of MT-LinAdapt reveals the insight of our two-level model adaptation solution: the immediate observations in user u will not only be used to update his/her own adaptation parameters in Au, but also be utilized to update the shared global model, thus to influence the other users, who do not have adaptation data yet. Two types of competing force drive the adaptation among all the users: ws = As ˜w0 requires timely update of global model across users; and wu = Auws enforces the individual user to conform to the newly updated global model. This effect can be better understood with the actual gradients used in this asychronized update. We illustrate the decomposed gradients for scaling operation in Au and As from the log-likelihood part in Eq (5) on a specific adaptation instance (xu d, yu d): ∂L(Au,As) ∂au k =∆u d X gu(i)=k  as gs(i)w0 i +bs gs(i)  xu di (6) ∂L(Au,As) ∂as l =∆u d X gs(i)=l au gu(i)w0 i xu di (7) where ∆u d = yu d −P u(yu d = 1|xu d), and gu(·) and gs(·) are feature grouping functions in individual user u and shared global model fs(x). As stated in Eq (6) and (7), the update of scaling operation in the shared global model and individual users depends on each other; the gradient with respect to global model adaptation will be accumulated among all the users. As a result, all users are coupled together via the global model adaptation in MT-LinAdapt, such that model update is propagated through users to alleviate data sparsity issue in each single user. This achieves the effect of multi-task learning. The same conclusion also applies to the shifting operations. It is meaningful for us to compare our proposed MT-LinAdapt algorithm with those discussed in the related work section. 
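Before that comparison, here is a small sketch of the asynchronized (online) update implied by Eq (6) and Eq (7): each newly observed labeled document from user u moves both that user's adaptation parameters and the shared global ones by one stochastic gradient-ascent step. The shifting gradients are written out by the same differentiation that gives Eq (6)-(7) (the paper only notes they follow the same pattern), the regularizer R(A) of Eq (4) is dropped for brevity, and the learning rate and variable names are my own.

```python
import numpy as np

def online_update(x, y, w0, a_s, b_s, g_s, a_u, b_u, g_u, lr=0.1):
    """One gradient-ascent step on the log-likelihood of a single observation
    (x, y) for user u, updating the user's (a_u, b_u) and the shared global
    (a_s, b_s) adaptation parameters in place."""
    w_s = a_s[g_s] * w0 + b_s[g_s]      # shared global weights
    w_u = a_u[g_u] * w_s + b_u[g_u]     # personalized weights
    p = 1.0 / (1.0 + np.exp(-(x @ w_u)))
    delta = y - p                        # Delta^u_d in Eq (6)-(7)

    # Eq (6) and its shifting analogue: gradients for the user's parameters.
    grad_a_u = np.bincount(g_u, weights=delta * w_s * x, minlength=len(a_u))
    grad_b_u = np.bincount(g_u, weights=delta * x, minlength=len(b_u))
    # Eq (7) and its shifting analogue: gradients for the shared parameters,
    # which accumulate contributions from every user's observations.
    grad_a_s = np.bincount(g_s, weights=delta * a_u[g_u] * w0 * x, minlength=len(a_s))
    grad_b_s = np.bincount(g_s, weights=delta * a_u[g_u] * x, minlength=len(b_s))

    a_u += lr * grad_a_u; b_u += lr * grad_b_u
    a_s += lr * grad_a_s; b_s += lr * grad_b_s
    return p
```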
Different from the model adaptation based personalized sentiment classification solution proposed in (Al Boni et al., 2015), which treats the global model as fixed, MT-LinAdapt adapts the global model to capture the evolving nature of social norms. As a result, in (Al Boni et al., 2015) the individualized model adaptations are independent from each other; but in MT-LinAdapt, the individual learning tasks are coupled together to enable observation sharing across tasks, i.e., multi-task learning. Additionally, as illustrated in Eq (6) and (7), nonlinear model adaptation is achieved in MT-LinAdapt because of the different feature groupings in individual users and global model. This enables observations sharing across different feature groups, while in (Al Boni et al., 2015) observations can only be shared within the same feature group, i.e., linear model adaptation. Multi-task SVM introduced in (Evgeniou and Pontil, 2004) can be considered as a special case of MT-LinAdapt. In Multi-task SVM, only shifting operation is considered in individual users and the global model is simply estimated from the pooled observations across users. Therefore, only linear model adaptation is achieved in Multi-task SVM and it cannot leverage prior knowledge conveyed in a predefined sentiment model. 4 Experiments In this section, we perform empirical evaluations of the proposed MT-LinAdapt model. We verified the effectiveness of different feature groupings in individual users’ and shared global model adaptation by comparing our solution with several stateof-the-art transfer learning and multi-task learning solutions for personalized sentiment classification, together with some qualitative studies to demonstrate how our model recognizes users’ distinct expressions of sentiment. 4.1 Experiment Setup • Datesets. We evaluated the proposed model on two large collections of review documents, i.e., Amazon product reviews (McAuley et al., 2015) and Yelp restaurant reviews (Yelp, 2016). Each review document contains a set of attributes such as author ID, review ID, timestamp, textual content, and an opinion rating in discrete five-star range. We applied the following pre-processing steps on both datasets: 1) filtered duplicated reviews; 2) labeled reviews with overall rating above 3 stars as positive, below 3 stars as negative, and removed the rest; 3) removed reviewers who posted more than 1,000 reviews and those whose positive review ratio is more than 90% or less than 10% 859 (little variance in their opinions and thus easy to classify). Since such users can be easily captured by the base model, the removal emphasizes comparisons on adapted models; 4) sorted each user’s reviews in chronological order. Then, we performed feature selection by taking the union of top unigrams and bigrams ranked by Chi-square and information gain metrics (Yang and Pedersen, 1997), after removing a standard list of stopwords and porter stemming. The final controlled vocabulary consists of 5,000 and 3,071 textual features for Amazon and Yelp datasets respectively; and we adopted TF-IDF as the feature weighting scheme. From the resulting data sets, we randomly sampled 9,760 Amazon reviewers and 11,733 Yelp reviewers for testing purpose. There are 105,472 positive reviews and 37,674 negative reviews in the selected Amazon dataset; 108,105 positive reviews and 32,352 negative reviews in the selected Yelp dataset. • Baselines. 
We compared the performance of MT-LinAdapt against seven different baselines, ranging from user-independent classifiers to several state-of-the-art model adaption methods and multi-task learning algorithms. Due to space limit, we will briefly discuss the baseline models below. Our solution requires a user-independent classifier as base sentiment model for adaptation. We estimated logistic regression models from a separated collection of reviewers outside the preserved testing data on Amazon and Yelp datasets accordingly. We also included these isolated base models in our comparison and name them as Base. In order to verify the necessity of personalized sentiment models, we trained a global SVM based on the pooled adaptation data from all testing reviewers, and name it as Global SVM. We also estimated an independent SVM model for each single user only based on his/her adaptation reviews, and name it as Individual SVM. We included an instance-based transfer learning method (Brighton and Mellish, 2002), which considers the k-nearest neighbors of each testing review document from the isolated training set for personalized model training. As a result, for each testing case, we estimated an independent classification model, which is denoted as ReTrain. (Geng et al., 2012) used L2 regularization to enforce the adapted models to be close to the global model. We applied this method to get personalized logistic regression models and refer to it as RegLR. LinAdapt developed in (Al Boni et al., 2015) also performs groupwise linear model adaptation to build personalization classifiers. But it isolates model adaptation in individual users. MT-SVM is a multi-task learning method, which encodes task relatedness via a shared linear kernel (Evgeniou and Pontil, 2004). • Evaluation Settings. We evaluated all the models with both synchronized (batch) and asynchronized (online) model update. We should note MTSVM can only be tested in batch mode, because it is prohibitively expensive to retrain SVM repeatedly. In batch evaluation, we split each user’s reviews into two sets: the first 50% for adaptation and the rest 50% for testing. In online evaluation, once we get a new testing instance, we first evaluate the up-to-date personalized classifier against the ground-truth; then use the instance to update the personalized model. To simulate the real-world situation where user reviews arrive sequentially and asynchronously, we ordered all reviews chronologically and accessed them one at a time for online model update. In particular, we utilized stochastic gradient descent for this online optimization (Kiwiel, 2001). Because of the biased class distribution in both datasets, we computed F1 measure for both positive and negative class in each user, and took macro average among users to compare the different models’ performance. 4.2 Effect of Feature Grouping In MT-LinAdapt, different feature groupings can be postulated in individual users’ and shared global model adaptation. Nonlinearity is introduced when different grouping functions are used in these two levels of model adaptation. Therefore, we first investigated the effect of feature grouping in MT-LinAdapt. We adopted the feature grouping method named “cross” in (Wang et al., 2013) to cluster features into different groups. More specifically, we evenly spilt the training collection into N nonoverlapping folds, and train a single SVM model on each fold. 
Then, we create a V × N matrix by putting the learned weights from N folds together, on which k-means clustering is applied to extract K feature groups. We compared the batch evaluation performance of varied combinations of feature groups in MT-LinAdapt. The experiment results are demonstrated in Table 1; and for comparison purpose, we also included the base classifier’s performance in the table. In Table 1, the two numbers in the first column denote the feature group sizes in personalized models and global model respectively. And all indicates one feature per group (i.e., no fea860 Table 1: Effect of different feature groupings in MT-LinAdapt. Method Amazon Yelp Pos F1 Neg F1 Pos F1 Neg F1 Base 0.8092 0.4871 0.7048 0.3495 400-800 0.8318 0.5047 0.8237 0.4807 400-1600 0.8385 0.5257 0.8309 0.4978 400-all 0.8441 0.5423 0.8345 0.5105 800-800 0.8335 0.5053 0.8245 0.4818 800-1600 0.8386 0.5250 0.8302 0.4962 800-all 0.8443 0.5426 0.8361 0.5122 1600-all 0.8445 0.5424 0.8357 0.5106 all-all 0.8438 0.5416 0.8361 0.5100 ture grouping). The adapted models in MTLinAdapt achieved promising performance improvement against the base sentiment classifier, especially on the Yelp data set. As we increased the feature group size for global model, MTLinAdapt’s performance kept improving; while with the same feature grouping in the shared global model, a moderate size of feature groups in individual users is more advantageous. These observations are expected. Because the global model is shared across users, all their adaptation reviews can be leveraged to adapt the global model so that sparsity is no longer an issue. Since more feature groups in the global model can be afforded, more accurate estimation of adaptation parameters can be achieved. But at the individual user level, data sparsity is still the bottleneck for accurate adaptation estimation, and trade-off between observation sharing and estimation accuracy has to be made. Based on this analysis, we selected 800 and all feature groups for individual models and global model respectively in the following experiments. 4.3 Personalized Sentiment Classification • Synchronized model update. Table 2 demonstrated the classification performance of MTLinAdapt against all baselines on both Amazon and Yelp datasets, where binomial tests on winloss comparison over individual users were performed between the best algorithm and runner-up to verify the significance of performance improvement. We can clearly notice that MT-LinAdapt significantly outperformed all baselines in negative class, and it was only slightly worse than MT-SVM on positive class. More specifically, per-user classifier estimation clearly failed to obtain a usable classifier, due to the sparse observations in single users. Model-adaptation based baselines, i.e., RegLR and LinAdapt, slightly improved over the base model. But because the adaptations across users are isolated and the base model is fixed, their improvement is very limited. As for negative class, MT-LinAdapt outperformed Global SVM significantly on both datesets. Since negative class suffers more from the biased prior distribution, the considerable performance improvement indicates effectiveness of our proposed personalized sentiment classification solution. As for positive class, the performance difference is not significant between MTLinAdapt and MT-SVM on Amazon data set nor between MT-LinAdapt and Global SVM on Yelp data set. 
By looking into detailed results, we found that MT-LinAdapt outperformed MT-SVM on users with fewer adaptation reviews. Furthermore, though MT-SVM benefits from multi-task learning, it cannot leverage information from the given base classifier. Considering the biased class prior in these two data sets (2.8:1 on Amazon and 3.3:1 on Yelp), the improved classification performance on negative class from MT-LinAdapt is more encouraging. Table 2: Classification results in batch mode. Method Amazon Yelp Pos F1 Neg F1 Pos F1 Neg F1 Base 0.8092 0.4871 0.7048 0.3495 Global SVM 0.8352 0.5403 0.8411 0.5007 Individual SVM 0.5582 0.2418 0.3515 0.3547 ReTrain 0.7843 0.4263 0.7807 0.3729 RegLR 0.8094 0.4896 0.7103 0.3566 LinAdapt 0.8091 0.4894 0.7107 0.3575 MT-SVM 0.8484 0.5367 0.8408 0.5079 MT-LinAdapt 0.8441 0.5422∗ 0.8358 0.5119∗ ∗indicates p-value<0.05 with Binomial test. • Asynchronized model update. In online model estimation, classifiers can benefit from immediate update, which provides a feasible solution for timely sentiment analysis in large datasets. In this setting, only two baseline models are applicable without model reconstruction, i.e., RegLR and LinAdapt. To demonstrate the utility of online update in personalized sentiment models, we illustrate the relative performance gain of these models over the base sentiment model in Figure 1. The xaxis indicates the number of adaptation instances consumed in online update from all users, i.e., the 1st review means after collecting the first review of each user. MT-LinAdapt converged to satisfactory performance with only a handful of observations in each user. LinAdapt also quickly converged, but its performance was very close to the base model, since no observation is shared across users. RegLR needs the most observations to estimate satisfac861 0 2 4 6 8 10 12 14 16 18 # documents -30.0% -25.0% -20.0% -15.0% -10.0% -5.0% 0.0% 5.0% Relative Performance of Pos F1(%) Amazon Dataset RegLR LinAdapt MT-LinAdapt 0 2 4 6 8 10 12 14 16 18 # documents -30.0% -25.0% -20.0% -15.0% -10.0% -5.0% 0.0% 5.0% 10.0% Relative Performance of Neg F1(%) Amazon Dataset RegLR LinAdapt MT-LinAdapt 0 2 4 6 8 10 12 14 16 18 # documents -25.0% -20.0% -15.0% -10.0% -5.0% 0.0% 5.0% 10.0% 15.0% Relative Performance of Pos F1(%) Yelp Dataset RegLR LinAdapt MT-LinAdapt 0 2 4 6 8 10 12 14 16 18 # documents -40.0% -30.0% -20.0% -10.0% 0.0% 10.0% 20.0% 30.0% 40.0% Relative Performance of Neg F1(%) Yelp Dataset RegLR LinAdapt MT-LinAdapt Figure 1: Relative performance gain between MT-LinAdapt and baselines on Amazon and Yelp datasets. tory personalized models. The improvement in MT-LinAdapt demonstrates the benefit of shared model adaptation, which is vital when the individuals’ adaptation data are not immediately available but timely sentiment classification is required. 0 20000 40000 60000 80000 100000 120000 140000 timestamp 0.0 0.2 0.4 0.6 0.8 1.0 F1 Measure posF1 negF1 0 2 4 6 8 10 Euclidean Distance |ws -w0 | |ws -wu | Figure 2: Online model update trace on Amazon. It is meaningful to investigate how the shared global model and personalized models are updated during online learning. The shift in the shared global model reflects changes in social norms, and the discrepancy between the shared global model and personalized models indicates the variances of individuals’ opinions. In particular, we calculated Euclidean distance between global model ws and base model w0 and that between individualized model wu and shared global model ws during online model updating. 
To visualize the results, we computed and plotted the average Euclidean distances in every 3000 observations during online learning, together with the corresponding variance. To illustrate a comprehensive picture of online model update, we also plotted the corresponding average F1 performance for both positive and negative class. Because the Euclidean distance between ws and w0 is much larger than that between ws and wu, we scaled ||ws −w0|| by 0.02 on Amazon dataset in Figure 2. Similar results were observed on Yelp data as well; but due to space limit, we do not include them. As we can clearly observe that the difference between the base model and newly adapted global model kept increasing during online update. At the earlier stage, it is increasing much faster than the later stage, and the corresponding classification performance improves more rapidly (especially in negative class). The considerably large variance between w0 and ws at the beginning indicates the divergence between old and new social norms across users. Later on, variance decreased and converged with more observations, which can be understood as the formation of the new social norms among users. On the other hand, the distance between personalized models and shared global model fluctuated a lot at the beginning; with more observations, it became stable later on. This is also reflected in the range of variance: the variance is much smaller in later stage than earlier stage, which indicates users comply to the newly established social norms. 862 Table 3: Shared model adaptation for cold start on Amazon and Yelp. Amazon Yelp Obs. Shared-SVM MT-SVM MT-LinAdapt Shared-SVM MT-SVM MT-LinAdapt Pos F1 Neg F1 Pos F1 Neg F1 Pos F1 Neg F1 Pos F1 Neg F1 Pos F1 Neg F1 Pos F1 Neg F1 1st 0.9004 0.7013 0.9264 0.7489 0.9122 0.7598 0.7882 0.5537 0.9040 0.7201 0.8809 0.7306 2nd 0.9200 0.6872 0.9200 0.7319 0.8945 0.7292 0.7702 0.5266 0.8962 0.6959 0.8598 0.6968 3rd 0.9164 0.6967 0.9164 0.7144 0.8967 0.7260 0.7868 0.5278 0.9063 0.7099 0.8708 0.7069 4.4 Shared Adaptation Against Cold Start Cold start refers to the challenge that a statistic model cannot draw any inference for users before sufficient observations are gathered (Schein et al., 2002). The shared model adaptation in MTLinAdapt helps alleviate cold start in personalized sentiment analysis, while individualized model adaptation method, such as RegLR and LinAdapt, cannot achieve so. To verify this aspect, we separated both Amazon and Yelp reviewers into two sets: we randomly selected 1,000 reviewers from the isolated training set and exhausted all their reviews to estimate a shared SVM model, MTLinAdapt and MT-SVM. Then those models were directly applied onto the testing reviewers for evaluation. Again, because it is time consuming to retrain a SVM model repeatedly, only MT-LinAdapt performed online model update in this evaluation. We report the performance on the first three observations from all testing users accordingly in Table 3. MT-LinAdapt achieved promising performance on the first testing cases, especially on the negative class. This indicates its estimated global model is more accurate on the new testing users. Because MT-SVM cannot be updated during this online test, only its previously estimated global model from the 1,000 training users can be applied here. As we can notice, its performance is very similar to the shared SVM model (especially on Amazon dataset). 
MT-LinAdapt adapts to this new collection of users very quickly, so that improved performance against the static models at later stage is achieved. 4.5 Vocabulary Stability One derivative motivation for personalized sentiment analysis is to study the diverse use of vocabulary across individual users. We analyzed the variance of words’ sentiment polarities estimated in the personalized models against the base model. Table 4 shows the most and the least variable features on both datasets. It is interesting to find that words with strong sentiment polarities tend to be more stable across users, such as “disgust,” “regret,” and “excel.” This demonstrates the sign Table 4: Top six words with the highest and lowest variances of learned polarities by MT-LinAdapt. Amazon Highest cheat healthi enjoy-read astound the-wrong the-amaz Lowest mistak favor excel regret perfect-for great Yelp Highest total-worth lazi was-yummi advis impress so-friend Lowest omg veri-good hungri frustrat disgust a-must of conformation to social norms. There are also words exhibiting high variances in sentiment polarity, such as “was-yummi,” “lazi,” and “cheat,” which indicates the heterogeneity of users’ opinionated expressions. 5 Conclusions In this work, we proposed to perform personalized sentiment classification based on the notion of shared model adaptation, which is motivated by the social theories that humans’ opinions are diverse but shaped by the ever-changing social norms. In the proposed MT-LinAdapt algorithm, global model sharing alleviates data sparsity issue, and individualized model adaptation captures the heterogeneity in humans’ sentiments and enables efficient online model learning. Extensive experiments on two large review collections from Amazon and Yelp confirmed the effectiveness of our proposed solution. The idea of shared model adaptation is general and can be further extended. We currently used a two-level model adaptation scheme. The adaptation can be performed at the user group level, i.e., three-level model adaptation. The user groups can be automatically identified to maximize the effectiveness of shared model adaptation. In addition, this method can also be applied to domain adaptation, where a domain taxonomy enables a hierarchically shared model adaptation. 6 Acknowledgments We thank the anonymous reviewers for their insightful comments. This paper is based upon work supported by the National Science Foundation under grant IIS-1553568. 863 References [Al Boni et al.2015] Mohammad Al Boni, Keira Qi Zhou, Hongning Wang, and Matthew S Gerber. 2015. Model adaptation for personalized opinion analysis. In Proceedings of ACL. [Argyriou et al.2008] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. 2008. Convex multi-task feature learning. Machine Learning, 73(3):243–272. [Bars¨ade and Gibson1998] Sigal G Bars¨ade and Donald E Gibson. 1998. Group emotion: A view from top and bottom. Research on managing groups and teams, 1:81–102. [Blitzer et al.2006] John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 EMNLP, pages 120–128. ACL. [Brighton and Mellish2002] Henry Brighton and Chris Mellish. 2002. Advances in instance selection for instance-based learning algorithms. Data mining and knowledge discovery, 6(2):153–172. [Briley et al.2000] Donnel A Briley, Michael W Morris, and Itamar Simonson. 2000. 
Reasons as carriers of culture: Dynamic versus dispositional models of cultural influence on decision making. Journal of consumer research, 27(2):157–178. [Ehrlich and Levin2005] Paul R Ehrlich and Simon A Levin. 2005. The evolution of norms. PLoS Biol, 3(6):e194. [Evgeniou and Pontil2004] Theodoros Evgeniou and Massimiliano Pontil. 2004. Regularized multi–task learning. In Proceedings of the 10th ACM SIGKDD, pages 109–117. ACM. [Evgeniou and Pontil2007] A Evgeniou and Massimiliano Pontil. 2007. Multi-task feature learning. Advances in neural information processing systems, 19:41. [Evgeniou et al.2007] Theodoros Evgeniou, Massimiliano Pontil, and Olivier Toubia. 2007. A convex optimization approach to modeling consumer heterogeneity in conjoint estimation. Marketing Science, 26(6):805–818. [Gao et al.2014] Wenliang Gao, Nobuhiro Kaji, Naoki Yoshinaga, and Masaru Kitsuregawa. 2014. Collective sentiment classification based on user leniency and product popularity. łł, 21(3):541–561. [Geng et al.2012] Bo Geng, Yichen Yang, Chao Xu, and Xian-Sheng Hua. 2012. Ranking model adaptation for domain-specific search. IEEE Transactions on Knowledge and Data Engineering, 24(4):745– 758. [Hochschild1975] Arlie Russell Hochschild. 1975. The sociology of feeling and emotion: Selected possibilities. Sociological Inquiry, 45(2-3):280–307. [Hu et al.2013] Xia Hu, Lei Tang, Jiliang Tang, and Huan Liu. 2013. Exploiting social relations for sentiment analysis in microblogging. In Proceedings of the 6th WSDM, pages 537–546. ACM. [Jacob et al.2009] Laurent Jacob, Jean-philippe Vert, and Francis R Bach. 2009. Clustered multi-task learning: A convex formulation. In NIPS, pages 745–752. [Kiwiel2001] Krzysztof C Kiwiel. 2001. Convergence and efficiency of subgradient methods for quasiconvex minimization. Mathematical programming, 90(1):1–25. [Li et al.2010] Guangxia Li, Steven CH Hoi, Kuiyu Chang, and Ramesh Jain. 2010. Micro-blogging sentiment detection by collaborative online learning. In ICDM, pages 893–898. IEEE. [Liu2012] Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1–167. [Max2014] Woolf Max. 2014. A statistical analysis of 1.2 million amazon reviews. http://minimaxir.com/2014/06/ reviewing-reviews. [McAuley et al.2015] Julian McAuley, Rahul Pandey, and Jure Leskovec. 2015. Inferring networks of substitutable and complementary products. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794. ACM. [Ostrom2014] Elinor Ostrom. 2014. Collective action and the evolution of social norms. Journal of Natural Resources Policy Research, 6(4):235–252. [Pan and Yang2010] Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. Knowledge and Data Engineering, IEEE Transactions on, 22(10):1345–1359. [Pan et al.2010] Sinno Jialin Pan, Xiaochuan Ni, JianTao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th WWW, pages 751–760. ACM. [Pang and Lee2005] Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd ACL, pages 115–124. ACL. [Pang and Lee2008] Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1– 135. [Pang et al.2002] Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. 
Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of EMNLP, pages 79–86. ACL. 864 [Schein et al.2002] Andrew I Schein, Alexandrin Popescul, Lyle H Ungar, and David M Pennock. 2002. Methods and metrics for cold-start recommendations. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 253–260. ACM. [Sherif1936] Muzafer Sherif. 1936. The psychology of social norms. [Shott1979] Susan Shott. 1979. Emotion and social life: A symbolic interactionist analysis. American journal of Sociology, pages 1317–1334. [Tan et al.2011] Chenhao Tan, Lillian Lee, Jie Tang, Long Jiang, Ming Zhou, and Ping Li. 2011. Userlevel sentiment analysis incorporating social networks. In Proceedings of the 17th ACM SIGKDD, pages 1397–1405. ACM. [Titov and McDonald2008] Ivan Titov and Ryan T McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In ACL, volume 8, pages 308–316. Citeseer. [Wang et al.2011] Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect keyword supervision. In Proceedings of the 17th ACM SIGKDD, pages 618–626. ACM. [Wang et al.2013] Hongning Wang, Xiaodong He, Ming-Wei Chang, Yang Song, Ryen W White, and Wei Chu. 2013. Personalized ranking model adaptation for web search. In Proceedings of the 36th ACM SIGIR, pages 323–332. ACM. [Wiebe et al.2005] Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 39(2-3):165–210. [Yang and Pedersen1997] Yiming Yang and Jan O Pedersen. 1997. A comparative study on feature selection in text categorization. In ICML, volume 97, pages 412–420. [Yelp2016] Yelp. 2016. Yelp dataset challenge. https://www.yelp.com/dataset_ challenge. [Zhu et al.1997] Ciyou Zhu, Richard H Byrd, Peihuang Lu, and Jorge Nocedal. 1997. Algorithm 778: Lbfgs-b: Fortran subroutines for large-scale boundconstrained optimization. ACM Transactions on Mathematical Software (TOMS), 23(4):550–560. 865
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 866–875, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Modeling Concept Dependencies in a Scientific Corpus Jonathan Gordon, Linhong Zhu, Aram Galstyan, Prem Natarajan, and Gully Burns USC Information Sciences Institute Marina del Rey, CA, USA {jgordon, linhong, galstyan, pnataraj, burns}@isi.edu Abstract Our goal is to generate reading lists for students that help them optimally learn technical material. Existing retrieval algorithms return items directly relevant to a query but do not return results to help users read about the concepts supporting their query. This is because the dependency structure of concepts that must be understood before reading material pertaining to a given query is never considered. Here we formulate an information-theoretic view of concept dependency and present methods to construct a “concept graph” automatically from a text corpus. We perform the first human evaluation of concept dependency edges (to be published as open data), and the results verify the feasibility of automatic approaches for inferring concepts and their dependency relations. This result can support search capabilities that may be tuned to help users learn a subject rather than retrieve documents based on a single query. 1 Introduction Corpora of technical documents, such as the ACL Anthology, are valuable for learners, but it can be difficult to find the most appropriate documents to read in order to learn about a concept. This problem is made more complicated by the need to trace the ideas back to those that need to be learned first (e.g., before you can learn about Markov logic networks, you should understand first-order logic and probability). That is, a crucial question when learning a new subject is “What do I need to know before I start reading about this?” To answer this question, learners typically rely on the guidance of domain experts, who can devise pedagogically valuable reading lists that order docAutomatic Speech Recognition (ASR) with HMMs Noisy Channel Model Viterbi Decoding for ASR Training ASR Parameters Viterbi Algorithm Dynamic Programming Decoding/ Search Problem HMMs Markov Chains HMM Pronunciation Lexicon Iterative Parameter Estimation with EM Gaussian Acoustic Model Discrete Fourier Transforms Gaussian Mixture Models Phonemes N-gram Language Model Figure 1: A human-authored concept graph excerpt, showing possible concepts related to automatic speech recognition and their concept dependencies. uments to progress from prerequisite to target concepts. Thus, it is desirable to have a model where each concept is linked to the prerequisite concepts it depends upon – a concept graph. A manually constructed concept graph excerpt related to automatic speech recognition is shown in Figure 1. The dependency relation between two concepts is interpreted as whether understanding one concept would help a learner understand the other. Representing a scientific corpus in this way can improve tasks such as curriculum planning (Yang et al., 2015), automatic reading list generation (Jardine, 2014), and improving education quality (Rouly et al., 2015). Motivated by the importance of representing the content of a scientific corpus as a concept graph, the challenge we address in this work is to automatically infer the concepts and their dependency relations. 
Towards this end, we first instantiate each concept as a topic from statistical topic modeling (Blei et al., 2003). To link concepts with directed depen866 dency edges, we propose the use of informationtheoretic measures, which we compare against baseline methods of computing word similarity, hierarchical clustering, and citation prediction. We then gather human annotations of concept graph nodes and edges learned from the ACL Anthology, which we use to evaluate these methods. The main contributions of this paper are: 1 We introduce the concept graph representation for modeling the technical concepts in a corpus and their relations. 2 We present information-theoretic approaches to infer concept dependence relations. 3 We perform the first human annotation of concept dependence for a technical corpus. 4 We release the human annotation data for use in future research. In the following section, we contrast this problem with previous work. We then describe the concept graph framework (Section 3) and present automatic approaches for inferring concept graphs (Section 4). The details of human evaluation are presented in Section 5. We discuss some interesting open questions related to this work in Section 6 before concluding this work. 2 Related Work There is a long history of work on identifying structure in the contents of a text corpus. Our approach is to link documents to concepts and to model relations among these concepts rather than to identify the specific claims (Sch¨afer et al., 2011) or empirical results (Choi et al., 2016) in each document. In this section, we first provide an overview of different relations between concepts, followed by discussion of some representative methods for inferring them. We briefly discuss the differences between these relations and the concept dependency relation we are interested in. Similarity Concepts are similar to the extent that they share content. Grefenstette (1994) applied the Jaccard similarity measure to relate concepts to each other. White and Jose (2004) empirically studied 10 similarity metrics on a small sample of 10 pairs of topics, and the results suggested that correlation-based measures best match general subject perceptions of search topic similarity. Hierarchy Previous work on linking concepts has usually been concerned with forming subsumption hierarchies from text (Woods, 1997; Sanderson and Croft, 1999; Cimiano et al., 2005) – e.g., Machine translation is part of Natural language processing – and more recent work does so for statistical topic models. Jonyer et al. (2002) applied graph-based hierarchical clustering to learn hierarchies from both structured and unstructured data. Ho et al. (2012) learn a topic taxonomy from the ACL Anthology and from Wikipedia with a method that scales linearly with the number of topics and the tree depth. Other relations Every pair of concepts is statistically correlated with each other based on word co-occurrence (Blei and Lafferty, 2006) providing a simple baseline metric for comparison. For a topic modeling approach performed over document citation links rather than over words or n-grams, Wang et al. (2013) gave a topic A’s dependence on another topic B as the probability of a document in A citing a document in B. Our approach to studying concept dependence differs from the relations derived from similarity, hierarchy, correlation and citation mentioned above, but intuitively they are related. 
We thus adapt one representative method for the similarity (Grefenstette, 1994), hierarchy (Jonyer et al., 2002), and citation likelihood (Wang et al., 2013) relations as baselines for computing concept dependency relations in Section 4.2.3. Concept dependence is also related to curriculum planning. Yang et al. (2015) and Talukdar and Cohen (2012) studied prerequisite relationships between course material documents based on external information from Wikipedia. They assumed that hyperlinks between Wikipedia pages and course material indicate a prerequisite relationship. With this assumption, Talukdar and Cohen (2012) use crowdsourcing approaches to obtain a subset of the prerequisite structure and train a maximum entropy– based classifier to identify the prerequisite structure. Yang et al. (2015) applied both classification and learning to rank approaches in order to classify or rank prerequisite structure. 3 Concept Graph Representation of a Text Corpus We represent the scientific literature as a labeled graph, where nodes represent both documents and concepts – and, optionally, metadata (such as author, title, conference, year) and features (such as 867                          Figure 2: The Concept Graph Data Schema. Each node is a class and edges are named relations between classes (with associated attributes). words, or n-grams) – and labeled edges represent the relations between nodes. Figure 2 shows an example schema for a concept graph representation for a scientific corpus. Concepts are abstract and require a concrete representation. In this work, we use statistical topic modeling, where each topic – a multinomial distribution over a vocabulary of words – is taken as a single concept. Documents are linked to concepts by weighted edges, which can be derived from the topic model’s document–topic composition distributions. Other approaches to identifying concepts are considered in Section 6. Concepts exhibit various relations to other concepts, such as hierarchy, connecting more general and more specific concepts; similarity; and correlation. We model each concept as a node and concept-to-concept relations as directed, weighted, labeled edges. The label of an edge denotes the type of relation, such as “is similar to”, “depends on”, and “relates to”, and the weights represent the strength of different relations. In this work, we focus on concept dependency, which is the least studied of these relations and, intuitively, the most important for learners. We consider there to be a dependency relation between two concepts if understanding one concept would help you to understand the other. This notion forms the core of our human-annotated data set which demonstrates that this idea is meaningful and robust for expert annotators when asked to judge if there exists a dependency relation between two concepts defined by LDA topics (see Section 5.2). 4 Learning the Concept Graph 4.1 Identifying Concepts The representation of concepts using topics is very general, and any effective topic modeling approach can be applied. These include probabilistic latent semantic indexing (PLSI) (Hofmann, 1999), latent Dirichlet allocation (LDA) (Blei et al., 2003), and non-negative matrix factorization (NMF) (Arora et al., 2012). In our experiments, we use the opensource tool Mallet (McCallum, 2002), which provides a highly scalable implementation of LDA; see Section 5.1 for more details. 
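To make the concept-identification step concrete, the following minimal sketch uses scikit-learn's LDA implementation as a stand-in for Mallet, on a few placeholder documents; the topic-word rows play the role of concepts and the document-topic rows give the document-to-concept edge weights. It illustrates the general recipe, not the authors' pipeline.

```python
# A minimal sketch, assuming scikit-learn as a stand-in for Mallet and a few
# placeholder documents; concepts are LDA topics, and documents are linked to
# concepts through the document-topic composition.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "statistical machine translation with phrase based decoding",
    "hidden markov models for automatic speech recognition",
    "latent dirichlet allocation topic models for text corpora",
]

# Bag-of-bigram counts (the paper filters stop words before forming bigrams).
vectorizer = CountVectorizer(ngram_range=(2, 2), stop_words="english")
counts = vectorizer.fit_transform(documents)

# Fit LDA; the paper selected a 300-topic model, a toy value is used here.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(counts)   # document-concept edge weights
topic_word = lda.components_            # one row per concept over the vocabulary

vocab = vectorizer.get_feature_names_out()
for k, row in enumerate(topic_word):
    top = row.argsort()[::-1][:5]       # most relevant bigrams for this concept
    print(f"concept {k}:", [vocab[i] for i in top])
```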
4.2 Discovering Concept Dependency Relations Identifying concept dependency relations between topics is the key step for building a useful concept graph. These relations add semantic structure to the contents of the text corpus, and they facilitate search and ordering in information retrieval. In this section, as a proof-of-concept, we propose two information-theoretic approaches to learn concept dependency relations: an approach based on cross entropy and another based on information flow. 4.2.1 Cross-entropy Approach The intuition of the cross-entropy approach is simple: Given concepts ci and cj, if most of the instances of ci can be explained by the occurrences of cj, but not vice versa, it is likely that ci depends on cj. For example, if ci is Markov logic networks (MLNs) and cj is Probability, we might say that observing MLNs depends on seeing Probability since most of the times that we see MLNs, we also see Probability, but the opposite does not hold. Given concepts ci and cj, the cross-entropy approach predicts that ci depends on cj if they satisfy these conditions: 1 The distribution of ci is better approximated by that of cj than the distribution of cj is approximated by that of ci. 2 The co-occurrence frequency of instances of ci and cj is relatively higher than that of a nondependency pair. Therefore, to predict the concept dependency relation, we need to examine whether the distribution of ci could well approximate the distribution of cj and the joint distribution of ci and cj. For this, we use cross entropy and joint entropy: Cross entropy measures the difference between two distributions. Specifically, the cross entropy for the distributions X and Y over a given set is defined as: H(X;Y) = H(X)+DKL(X||Y) (1) 868 where H(X) is the entropy of X, and DKL(X||Y) is the Kullback–Leibler divergence of an estimated distribution Y from true distribution X. Therefore, H(X;Y) examines how well the distribution of Y approximates that of X. Joint entropy measures the information we obtained when we observe both X and Y. The joint Shannon entropy of two variables X and Y is defined as: H(X,Y) = ∑ X ∑ Y P(X,Y)log2 P(X,Y) (2) where P(X,Y) is the joint probability of these values occurring together. Based on the conditions listed above and these definitions, we say that ci depends on cj if and only if they satisfy the following constraints: H(ci;cj) > H(cj;ci) H(ci,cj) ≤θ (3) with θ as a threshold value, which can be interpreted as “the average joint entropy of any nondependence concepts”. The weight of the dependency is defined as: DCE(ci,cj) = H(ci;cj) The cross-entropy method is general and can be applied to different distributions used to model concepts, such as distributions of relevant words, of relevant documents, or of the documents that are cited by relevant documents. 4.2.2 Information-flow Approach Now we consider predicting concept dependency relations from the perspective of navigating information. Imagine that we already have a perfect concept dependency graph. When we are at a concept node (e.g., reading a document about it), the navigation is more likely to continue to a concept it depends on than to other concepts that it doesn’t depend on. To give a concrete example, if we are navigating from the concept Page rank, it is more likely for us to jump to Eigenvalue than to Language model. Therefore, if concept ci depends on concept cj, then cj generally receives more navigation hits than ci and has higher “information flow”. 
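The cross-entropy test can be instantiated in a few lines once each concept is represented as a smoothed probability distribution over a shared set of terms. The sketch below follows the constraints of Eq. (3) literally; the toy distributions, the joint-occurrence table, and the threshold value are illustrative assumptions, since the exact distributions (over words, documents, or citations) are left open in the text.

```python
import numpy as np

def cross_entropy(p, q):
    """H(p; q) = H(p) + D_KL(p || q) = -sum_x p(x) * log2 q(x)."""
    return float(-np.sum(p * np.log2(q)))

def joint_entropy(p_xy):
    """H(X, Y) = -sum_{x,y} P(x, y) * log2 P(x, y), over a joint occurrence table."""
    p = p_xy[p_xy > 0]
    return float(-np.sum(p * np.log2(p)))

def ce_dependency(p_i, p_j, h_joint, theta):
    """Eq. (3): c_i depends on c_j if H(ci; cj) > H(cj; ci) and H(ci, cj) <= theta."""
    if h_joint > theta:
        return False, 0.0
    h_ij = cross_entropy(p_i, p_j)
    h_ji = cross_entropy(p_j, p_i)
    return h_ij > h_ji, h_ij            # weight D_CE(ci, cj) = H(ci; cj)

# Toy smoothed distributions over a shared four-term vocabulary, plus a toy
# joint occurrence table for the two concepts.
c_i = np.array([0.55, 0.25, 0.15, 0.05])
c_j = np.array([0.30, 0.30, 0.25, 0.15])
joint = np.array([[0.40, 0.15], [0.15, 0.30]])
print(ce_dependency(c_i, c_j, joint_entropy(joint), theta=2.0))
```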
Based on this intuition, we can predict concept dependency relations using information flow: Given concepts ci and cj, ci depends on cj if they satisfy these conditions: Parallel corpora for machine translation Japanese 0.31 Comparable corpora 0.01 Collocation 0.02 Data structures 0.06 Idiomatic expressions 0.08 Beam search & other search algorithms Objective functions 0.04 Machine translation models 0.03 0.15 Machine translation systems 0.29 0.31 Computational linguistics (discipline) 0.25 Paraphrase generation Textual entailment 0.04 Machine translation evaluation 0.09 Human assessment 0.11 Figure 3: A concept graph excerpt related to machine translation, where concepts are linked based on cross entropy. Concepts are represented by manually chosen names, and links to documents are omitted. 1 The concept ci receives relatively lower navigation hits than cj. 2 The number of navigation traces from concept ci to cj is much stronger than that to another non-dependent concept ck. While we do not have data for human navigation between concepts, a natural way to simulate this is through information flow. As proposed by Rosvall and Bergstrom (2008), we use the probability flow of random walks on a network as a proxy for information flow in the real system. Given any observed graph G, the information score I(v) of a node v, is defined as its steady state visit frequency. The information flow I(u,v) from node u to node v, is consequently defined as the transition probability (or “exit probability”) from u to v. To this end, we construct a graph connecting concepts by their co-occurrences in documents, and we can use either Map Equation (Rosvall and 869 Bergstrom, 2008) or Content Map Equation (Smith et al., 2014) to compute the information flow network and the information score for each concept node. The details are outlined as follows: 1 Construct a concept graph Gco based on cooccurrence observations. We define weighted, undirected edges within the concept graph based on the number of documents in which the concepts co-occur. Formally, given concepts ci and cj and a threshold 0 ≤τ ≤1, the weighted edge is calculated as: wco(ci,cj) = ( ∑d p(ci|d)p(cj|d) if p(c|d) > τ 0 otherwise (4) 2 Given the graph Gco, we compute the information score I(c) for each concept node c and information flow I(ci,cj) between a pair of nodes ci and cj. For the details of calculating I(c) and I(ci,cj), refer to Map Equation (Rosvall and Bergstrom, 2008) and Content Map Equation (Smith et al., 2014). 3 Given two concepts ci and cj, we link ci to cj with a directed edge if I(ci) > I(cj) with weight: DIF(ci,cj) = I(ci,cj) The information flow approach for inferring dependency can be further improved with a few true human navigation traces. As introduced earlier, the concept graph representation facilitates applications such as reading list generation, and document retrieval. Those applications enable the collection of human navigation traces, which can provide a better approximation of dependency relation. 4.2.3 Baseline Approaches Similarity Relations Intuitively, concepts that are more similar (e.g., Machine translation and Machine translation evaluation) are more likely to be connected by concept dependency relations than less similar concepts are. As a baseline, we compute the Jaccard similarity coefficient based on the top 20 words or n-grams in the concept’s topic word distributions. 
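A rough sketch of this procedure is given below. It builds the co-occurrence graph of Eq. (4) from a documents-by-concepts matrix and substitutes a plain weighted PageRank for the Map Equation's steady-state visit frequencies; edge weights reuse the co-occurrence weights rather than the transition probabilities I(ci, cj). These substitutions are simplifying assumptions, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def cooccurrence_graph(doc_topic, tau=0.1):
    """Eq. (4): weighted, undirected concept graph from document co-occurrence."""
    masked = np.where(doc_topic > tau, doc_topic, 0.0)
    weights = masked.T @ masked                    # sum_d p(ci|d) * p(cj|d)
    graph = nx.Graph()
    n = weights.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if weights[i, j] > 0:
                graph.add_edge(i, j, weight=float(weights[i, j]))
    return graph

def information_flow_edges(doc_topic, tau=0.1):
    graph = cooccurrence_graph(doc_topic, tau)
    score = nx.pagerank(graph, weight="weight")    # stand-in for the information score I(c)
    edges = []
    for i, j, data in graph.edges(data=True):
        # Step 3 of the text: link ci -> cj if I(ci) > I(cj).
        src, dst = (i, j) if score[i] > score[j] else (j, i)
        edges.append((src, dst, data["weight"]))   # weight here is a co-occurrence proxy
    return edges

doc_topic = np.array([[0.7, 0.2, 0.1],
                      [0.3, 0.5, 0.2],
                      [0.1, 0.3, 0.6]])
print(information_flow_edges(doc_topic))
```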
Hierarchical Relations Previous work has looked at learning hierarchies that connect broader topics (acting as equivalent proxies for concepts in our work) to more specific subtopics (Cimiano et al., 2005; Sanderson and Croft, 1999). We compare against a method for doing so to see how close identifying hierarchical relations comes to our goal of identifying concept dependency relations. Specifically, we perform agglomerative clustering over the topic–topic co-occurrence graph Gco with weights defined in Eq. 4, in order to obtain the hierarchical representation for concepts. Citation-based Given concepts ci and cj, if the documents that are highly related to cj are cited by most of the instances of ci, ci may depend on cj. Wang et al. (2013) used this approach in the context of CitationLDA topic modeling, where topics are learned from citation links rather than text. We adapt this for regular LDA so that the concept ci depends on cj with weight DCite(ci,cj) = ∑ d1∈D ∑ d2∈Cd1 T1,iT2,j (5) where D is the set of all documents, Cd are the documents cited by d, and Tx,y is the distribution of documents dx composed of concepts cy. For this method, we return a score of 0 if the concepts do not co-occur in at least three documents. 5 Evaluation of Concept Graphs There are two main approaches to evaluating a concept graph: We can directly evaluate the graph, using human judgments to measure the quality of the concepts and the reliability of the links between them. Alternatively, we can evaluate the application of a concept graph to a task, such as ordering documents for a reading list or recommending documents to cite when writing a paper. Our motivation to build a concept graph from a technical corpus is to improve performance at the task of reading list generation. However, an applied evaluation makes it harder to judge the quality of the concept graph itself. Each document contains a combination of concepts, which have different ordering restrictions, and other factors also affect the quality of a reading list, such as the classification of document difficulty and type (e.g., survey, tutorial, or experimental results). As such, we focus on a direct human evaluation of our proposed methods for building a concept graph and leave the measure of applied performance to future work. 5.1 Corpus and its Evaluation Concept Graphs For this evaluation, the scientific corpus we use is the ACL Anthology. This consists of articles published in a variety of journals, conferences, 870 and workshops related to computational linguistics. Specifically, we use a modified copy of the plain text distributed for the ACL Anthology Network (AAN), release 2013 (Radev et al., 2013), which includes 23,261 documents from 1965 to 2013. The AAN includes plain text for documents, with OCR performed using PDFBox. We manually substituted OmniPage OCR output from the ACL Anthology Reference Corpus, version 1 (Bird et al., 2008) for documents where it was observed to be of higher quality. The text was processed to join words that were split across lines with hyphens. We manually removed documents that were not written in English or where text extraction failed, leaving 20,264 documents, though this filtering was not exhaustive. The topic model we used was built using the Mallet (McCallum, 2002) implementation of LDA. It is composed of bigrams, filtered of typical English stop words before the generation of bigrams, so that, e.g., “word to word” yields the bigram “word word”. 
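The two non-hierarchical baselines reduce to a few lines. In the sketch below, `topic_word`, `doc_topic`, and `cited_by` are illustrative placeholders, and the rule that zeroes the citation score when concepts co-occur in fewer than three documents is omitted.

```python
import numpy as np

def top_term_sets(topic_word, k=20):
    """Index sets of the top-k terms in each concept's topic-word distribution."""
    return [set(row.argsort()[::-1][:k]) for row in topic_word]

def jaccard_baseline(topic_word, i, j, k=20):
    """Word-similarity baseline D_Sim: Jaccard coefficient over top-k terms."""
    tops = top_term_sets(topic_word, k)
    return len(tops[i] & tops[j]) / len(tops[i] | tops[j])

def citation_baseline(doc_topic, cited_by, i, j):
    """Citation baseline (Eq. 5): sum over citing pairs d1 -> d2 of T_{d1,i} * T_{d2,j}."""
    return sum(doc_topic[d1, i] * doc_topic[d2, j]
               for d1, cites in cited_by.items() for d2 in cites)

topic_word = np.array([[5.0, 4.0, 3.0, 0.0, 0.0],
                       [0.0, 4.0, 3.0, 5.0, 1.0],
                       [1.0, 0.0, 0.0, 4.0, 5.0]])
doc_topic = np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7]])
cited_by = {1: [0], 2: [0, 1]}    # document 1 cites 0; document 2 cites 0 and 1
print(jaccard_baseline(topic_word, 0, 1, k=3), citation_baseline(doc_topic, cited_by, 0, 1))
```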
We generated topic models consisting of between 20 and 400 topics and selected a 300-topic model based on manual inspection. Documents were linked to concepts based on the document’s LDA topic composition. The concept nodes for each topic were linked in concept dependency relations using each of the methods described in Section 4, producing five concept graphs to evaluate. We applied the general cross-entropy method to the distribution of top-k bigrams for each concept. For all methods, the results we report are for k = 20. Changing this value shifts the precision– recall trade-off, but in our experiments, the relative performance of the methods are generally consistent for different values of k. Since it is impractical to manually annotate all pairs of concept nodes from a 300-node graph, we selected a subset of edges for evaluation. Intuitively, the evaluation set should satisfy the following sampling criteria: (1) The evaluation set should cover the top weighted edges for a precision evaluation. (2) The evaluation set should cover the bottomweighted edges for a recall evaluation. (3) The evaluation set should provide low-biased sampling. With respect to these requirements, we generated an evaluation edge set as the union of the following three sets: 1 Top-20 edges for each approach (including baseline approaches) 2 A random shuffle selection from the union of Judges All Coherent Related Dependent Non-NLP 0.407 0.446 0.305 0.329 NLP 0.526 0.610 0.448 0.395 All 0.467 0.529 0.354 0.357 Table 1: Inter-annotator agreement measured as Pearson correlation. Relevant phrases: machine translation, translation system, mt system, transfer rules, mt systems, lexical transfer, analysis transfer, translation process, transfer generation, transfer component, analysis synthesis, transfer phase, analysis generation, structural transfer, transfer approach, human translation, transfer grammar, analysis phase, translation systems, transfer process Relevant documents: • Slocum: Machine Translation: Its History, Current Status, and Future Prospects (89%) • Slocum: A Survey of Machine Translation: Its History, Current Status, and Future Prospects (89%) • Wilks, Carbonnell, Farwell, Hovy, Nirenburg: Machine Translation Again? (56%) • Slocum: An Experiment in Machine Translation (55%) • Krauwer, Des Tombe: Transfer in a Multilingual MT System (54%) Figure 4: An example of the presentation of a topic for human evaluation. the top-50 and bottom-50 edges in terms of the baseline word similarity.1 3 A random shuffle section from the union of top100 edges in terms of the proposed approaches. 5.2 Human Annotation For annotation, we present pairs of topics followed by questions. Each topic is presented to a judge as a list of the most relevant bigrams in descending order of their topic-specific “collapsed” probabilities. These are presented in greyscale so that the most relevant items appear black, fading through grey to white as the strength of that item’s association with the topic decreases. The evaluation interface also lists the documents that are most relevant to the topic, linked to the original PDFs. These documents can be used to clarify the occurrence of unfamiliar terms, such as author names or common examples that may show up in the topic representation. An example topic is shown in Figure 4. For each topic, judges were asked: 1 How clear and coherent is Topic 1? 2 How clear and coherent is Topic 2? 
1We observe that usually if the edge strength in terms of one of the information-theoretic methods is zero, the word similarity is zero as well, but if the word similarity is zero, the edge strength in terms of the proposed methods may be non-zero. 871 Edges Top 20 Top 150 All scores > 0 Prec. Prec. Rec. f1 Prec. Rec. f1 Cross entropy (DCE) 0.851 0.765 0.358 0.487 0.693 0.670 0.681 Information flow (DIF) 0.793 0.696 0.311 0.429 0.693 0.323 0.441 Word similarity (DSim) 0.808 0.768 0.382 0.511 0.768 0.382 0.511 Hierarchy (DHier) 0.680 0.692 0.297 0.416 0.686 0.638 0.661 Cite (DCite) 0.693 0.718 0.343 0.465 0.693 0.670 0.681 Random 0.659 0.661 0.580 0.500 0.658 1.000 0.794 Table 2: Precision, recall, and f-scores (with different thresholds for which edges are included) for the methods of predicting dependency relations between concepts described in Section 4.2. If both topics are at least somewhat clear: 3 How related are these topics? 4 Would understanding Topic 1 help you to understand Topic 2? 5 Would understanding Topic 2 help you to understand Topic 1? For each question, they could answer “I don’t know” or select from an ordinal scale: 1 Not at all 2 Somewhat 3 Very much The evaluation was completed by eight judges with varying levels of familiarity with the technical domain. Four judges are NLP researchers: Three PhD students working in the area and one of the authors. Four judges are familiar with NLP but have less experience with NLP research: two MS students, an AI PhD student, and one of the authors. The full evaluation was divided into 10 sets taking a total of around 6–8 hours per person to annotate. Their overall inter-annotator agreement and the agreement for each question type is given in Table 1. Agreement is higher when we consider only judgments from NLP researchers, but in all cases is moderate, indicating the difficulty of interpreting statistical topics as concepts and judging the strength (if any) of the concept dependency relation between them. The topic coherence judgments that were collected served to make each human judge consider how well she understood each topic before judging their dependence. The topic relatedness questions provided an opportunity to indicate that if the annotator recognized a relation between the topics without needing to say that their was a dependence. 5.3 Evaluation of Automatic Methods To measure the quality of the concept dependency edges in our graphs, we compute the average precision for the strongest edges in each concept graph, up to three thresholds: the top 20 edges, the top 150, and all edges with strength > 0. These precision scores are in Table 2 as well as the corresponding recall, and f1 scores for the larger thresholds. Despite the difference in inter-annotator agreement reported in Table 1, the ordering of methods by precision is the same whether we consider only the judgments of NLP experts, non-NLP judges, or everyone, so we only report the average across all annotators. When we examine the results of precision at 20 – the strongest edges predicted by each method – we see that the cross-entropy method performs best. For comparison, we report the accuracy of a baseline of random numbers between 0 and 1. While all methods have better than chance precision, the random baseline has higher recall since it predicts a dependency relation of non-zero strength for all pairs. As we consider edges predicted with lower confidence, the word similarity approach shows the highest precision. 
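The precision/recall computation behind Table 2 can be sketched as follows, assuming `predicted` maps concept pairs to predicted edge strengths and `gold` maps annotated pairs to a boolean dependency judgment ("Somewhat" or "Very much" counted as positive); the names and toy data are illustrative.

```python
def precision_recall_f1(predicted, gold, top_k=None):
    """Precision/recall over the strongest predicted edges, as in Table 2."""
    ranked = sorted(predicted.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]
    kept = {pair for pair, weight in ranked if weight > 0}
    positives = {pair for pair, dep in gold.items() if dep}
    tp = len(kept & positives)
    precision = tp / len(kept) if kept else 0.0
    recall = tp / len(positives) if positives else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

predicted = {("A", "B"): 0.9, ("B", "C"): 0.4, ("A", "C"): 0.0}
gold = {("A", "B"): True, ("B", "C"): False, ("A", "C"): True}
print(precision_recall_f1(predicted, gold, top_k=2))
```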
A limitation of the word similarity baseline is that it is symmetric while concept dependence relations can be asymmetric. Annotators marked many pairs of concepts as being at least somewhat co-dependent. E.g., understanding Speech recognition strongly helps you understand Natural language processing, but being familiar with this broader topic also somewhat helps you understand the narrower one. The precision scores we report count both annotations of concept dependence (“Somewhat” and “Very much”) as positive predictions, but other evaluation metrics might show a greater benefit for methods like DCE that can predict dependency with asymmetric strengths. 6 Discussion Another natural evaluation of an automatically generated concept graph would be to compare it to a 872 Machine transliteration Parallel corpora for machine translation 2.672.67 Word alignment 2.33 Machine translation models 2.33 Machine translation evaluation 2.33 Machine translation systems 2.33 Sentence alignment 2.67 Part-of-speech tagging 2.33 Comparable corpora 2.67 2.67 3.00 2.67 Reordering model 3.00 Beam search & other search algorithms 3.00 Phrase-based machine translation 3.00 Language model 3.00 2.67 2.33 Human assessment 2.33 3.00 3.00 Hidden Markov models 3.00 Coding scheme 2.50 Annotation 3.00 2.50 2.67 Data models for linguistic annotation 2.332.67 2.67 2.67 2.50 Figure 5: A concept graph excerpt related to machine translation, where concepts are joined based on the judgments of human annotators. Concepts are represented by manually chosen names, and links to documents are omitted. human-generated gold standard, where an expert has created concept nodes at the optimal level of generality and linked these by her understanding of the conceptual dependencies among concepts in the domain. However, there are several difficulties with this approach: (1) It is quite labor-intensive to manually generate a concept graph; (2) we expect only moderate agreement between graphs produced by different experts, who have different ideas of what concepts are important and distinct and which concepts are important to understanding others; and (3) the concept graphs we learn from a collection of documents will differ significantly from those we imagine, without these differences necessarily being better or worse. In this work, we assume that a topic model provides a reasonable proxy for the concepts a person might identify in a technical corpus. However, topic modeling approaches are better at finding general areas of research than at identifying fine-grained concepts like those shown in Figure 1. The concept graph formalism can be extended with the use of discrete entities, identified by a small set of names, e.g., (First-order logic, FOL). We have performed initial work on two approaches to extract entities: 1 We can use an external reference, Wikipedia, to help entity extraction. We count the occurrences of each article title in the scientific corpus, and we keep the high-frequency titles as entities. For example, in the ACL Anthology corpus, we obtain 56 thousand entities (page titles) that occurred at least once and 1,123 entities that occur at least 100 times. 2 We cannot assume that the important entities in every scientific or technical corpus will be well-represented on Wikipedia. In the absence of a suitable external reference source, we can use the open-source tool SKIMMR (Nov´aˇcek and Burns, 2014) or the method proposed by Jardine (2014) to extract important noun phrases to use as entities. 
The importance of a potential entity can be computed based on the occurrence frequency and the sentence-level co-occurrence frequency with other phrases. Another limitation of using a topic model like LDA as a proxy for concepts is that the topics are static, while a corpus may span decades of research. Studying how latent models might evolve or “drift” over time within a textual corpus describing a technical discipline is an important research question, and our approach could be extended to add or remove topics in a central model over time. Despite its limitations, a topic model is useful for automatically discovering concepts in a corpus even if the concept is not explicitly mentioned in a document (e.g., the words “axiom” or “predi873 cate” might indicate discussion of logic) or has no canonical name. The concept graph representation allows for the introduction of additional or alternative features for concepts, making it suitable for new methods of identifying and linking concepts. 7 Conclusions Problems such as reading list generation require a representation of the structure of the content of a scientific corpus. We have proposed the concept graph framework, which gives weighted links from documents to the concepts they discuss and links concepts to one another. The most important link in the graph is the concept dependency relation, which indicates that one concept helps a learner to understand another, e.g., Markov logic networks depends on Probability. We have presented four approaches to predicting these relations. We propose information-theoretic measures based on cross entropy and on information flow. We also present baselines that compute the similarity of the word distributions associated with each concept, the likelihood of a citation connecting the concepts, and a hierarchical clustering approach. While word similarity proves a strong baseline, the strongest edges predicted by the crossentropy approach are more precise. We are releasing human annotations of concept nodes and possible dependency edges learned from the ACL Anthology as well as implementations of the methods described in this paper to enable future research on modeling scientific corpora.2 Acknowledgments The authors thank Yigal Arens, Emily Sheng, and Jon May for their valuable feedback on this work. This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via the Air Force Research Laboratory. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government. 2The code and data associated with this work are available at http://techknacq.isi.edu References Sanjeev Arora, Rong Ge, and Ankur Moitra. 2012. Learning topic models – going beyond SVD. In Proceedings of the 53rd Annual Symposium on Foundations of Computer Science, pages 1–10. IEEE. Steven Bird, Robert Dale, Bonnie Dorr, Bryan Gibson, Mark Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir Radev, and Yee Fan Tan. 2008. The ACL Anthology Reference Corpus: A reference dataset for bibliographic research in computational linguistics. In Proceedings of the Sixth International Conference on Language Resources and Evaluation, Marrakech, Morocco, May. European Language Resources Association. 
David Blei and John Lafferty. 2006. Correlated topic models. In Advances in Neural Information Processing Systems. David M. Blei, Andew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Eunsol Choi, Matic Horvat, Jon May, Kevin Knight, and Daniel Marcu. 2016. Extracting structured scholarly information from the machine translation literature. In Proceedings of the 10th International Conference on Language Resources and Evaluation. European Language Resources Association. Philipp Cimiano, Andreas Hotho, and Steffen Staab. 2005. Learning concept hierarchies from text corpora using formal concept analysis. Journal of Artificial Intelligence Research, 24(1):305–39, August. Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Norwell, MA, USA. Qirong Ho, Jacob Eisenstein, and Eric P. Xing. 2012. Document hierarchies from text and links. In Proceedings of the International World Wide Web Conference, April. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 50–7. ACM. James G. Jardine. 2014. Automatically generating reading lists. Technical Report UCAM-CL-TR848, University of Cambridge Computer Laboratory, February. Istvan Jonyer, Diane J. Cook, and Lawrence B. Holder. 2002. Graph-based hierarchical conceptual clustering. Journal of Machine Learning Research, 2:19– 43, March. Andrew McCallum. 2002. MALLET: A machine learning for language toolkit. http://mallet.cs.umass. edu. 874 V´ıt Nov´aˇcek and Gully APC Burns. 2014. SKIMMR: Facilitating knowledge discovery in life sciences by machine-aided skim reading. PeerJ, 2:e483. Dragomir R. Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The ACL Anthology Network Corpus. Language Resources and Evaluation, pages 1–26. Martin Rosvall and Carl T. Bergstrom. 2008. Maps of random walks on complex networks reveal community structure. Proceedings of the National Academy of Sciences, 105(4):1118–23. Jean Michel Rouly, Huzefa Rangwala, and Aditya Johri. 2015. What are we teaching?: Automated evaluation of CS curricula content using topic modeling. In Proceedings of the Eleventh Annual International Conference on International Computing Education Research, pages 189–197. Mark Sanderson and Bruce Croft. 1999. Deriving concept hierarchies from text. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 206–13, New York, NY, USA. ACM. Ulrich Sch¨afer, Bernd Kiefer, Christian Spurk, J¨org Steffen, and Rui Wang. 2011. The ACL Anthology Searchbench. In Proceedings of the ACL-HLT 2011 System Demonstrations, pages 7–13. Laura M. Smith, Linhong Zhu, Kristina Lerman, and Allon G. Percus. 2014. Partitioning networks with node attributes by compressing information flow. arXiv preprint arXiv:1405.4332. Partha Pratim Talukdar and William W. Cohen. 2012. Crowdsourced comprehension: Predicting prerequisite structure in Wikipedia. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 307–15. Association for Computational Linguistics. Xiaolong Wang, Chengxiang Zhai, and Dan Roth. 2013. Understanding evolution of research themes: A probabilistic generative model for citations. 
In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1115–23, New York, NY, USA. ACM. Ryen W. White and Joemon M. Jose. 2004. A study of topic similarity measures. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and development in Information Retrieval, pages 520–1. ACM. William A. Woods. 1997. Conceptual indexing: A better way to organize knowledge. Technical report, Sun Microsystems, Inc., Mountain View, CA, USA. Yiming Yang, Hanxiao Liu, Jaime Carbonell, and Wanli Ma. 2015. Concept graph learning from educational data. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 159–68. ACM. 875
2016
82
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 876–886, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Normalized Log-Linear Interpolation of Backoff Language Models is Efficient Kenneth Heafield University of Edinburgh 10 Crichton Street Edinburgh EH8 9AB United Kingdom [email protected] Chase Geigle Sean Massung University of Illinois at Urbana-Champaign 707 S. Mathews Ave. Urbana, IL 61801 United States {geigle1,massung1,lanes}@illinois.edu Lane Schwartz Abstract We prove that log-linearly interpolated backoff language models can be efficiently and exactly collapsed into a single normalized backoff model, contradicting Hsu (2007). While prior work reported that log-linear interpolation yields lower perplexity than linear interpolation, normalizing at query time was impractical. We normalize the model offline in advance, which is efficient due to a recurrence relationship between the normalizing factors. To tune interpolation weights, we apply Newton’s method to this convex problem and show that the derivatives can be computed efficiently in a batch process. These findings are combined in new open-source interpolation tool, which is distributed with KenLM. With 21 out-of-domain corpora, log-linear interpolation yields 72.58 perplexity on TED talks, compared to 75.91 for linear interpolation. 1 Introduction Log-linearly interpolated backoff language models yielded better perplexity than linearly interpolated models (Klakow, 1998; Gutkin, 2000), but experiments and adoption were limited due the impractically high cost of querying. This cost is due to normalizing to form a probability distribution by brute-force summing over the entire vocabulary for each query. Instead, we prove that the log-linearly interpolated model can be normalized offline in advance and exactly expressed as an ordinary backoff language model. This contradicts Hsu (2007), who claimed that log-linearly interpolated models “cannot be efficiently represented as a backoff n–gram model.” We show that offline normalization is efficient due to a recurrence relationship between the normalizing factors (Whittaker and Klakow, 2002). This forms the basis for our opensource implementation, which is part of KenLM: https://kheafield.com/code/kenlm/. Linear interpolation (Jelinek and Mercer, 1980), combines several language models pi into a single model pL pL(wn | wn−1 1 ) = X i λipi(wn | wn−1 1 ) where λi are weights and wn 1 are words. Because each component model pi is a probability distribution and the non-negative weights λi sum to 1, the interpolated model pL is also a probability distribution. This presumes that the models have the same vocabulary, an issue we discuss in §3.1. A log-linearly interpolated model pLL uses the weights λi as powers (Klakow, 1998). pLL(wn | wn−1 1 ) ∝ Y i pi(wn | wn−1 1 )λi The weights λi are unconstrained real numbers, allowing parameters to soften or sharpen distributions. Negative weights can be used to divide a mixed-domain model by an out-of-domain model. To form a probability distribution, the product is normalized pLL(wn | wn−1 1 ) = Q i pi(wn | wn−1 1 )λi Z(wn−1 1 ) where normalizing factor Z is given by Z(wn−1 1 ) = X x Y i pi(x | wn−1 1 )λi The sum is taken over all words x in the combined vocabulary of the underlying models, which can number in the millions or even billions. Computing Z efficiently is a key contribution in this work. 
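The cost of query-time normalization is easy to see in code: every query sums the weighted product over the whole vocabulary. The sketch below uses a toy unigram stand-in for the component models; it is not the KenLM or SRILM API.

```python
import math

class ToyModel:
    """Placeholder unigram model standing in for a component LM (not a KenLM/SRILM API)."""
    def __init__(self, probs):
        self.probs = probs
    def prob(self, word, context):
        return self.probs.get(word, self.probs["<unk>"])

def log_linear_prob(word, context, models, weights, vocab):
    def unnorm(w):
        return math.exp(sum(lam * math.log(m.prob(w, context))
                            for m, lam in zip(models, weights)))
    z = sum(unnorm(x) for x in vocab)   # the expensive part: one sum over the vocabulary per query
    return unnorm(word) / z

models = [ToyModel({"<unk>": 0.1, "a": 0.5, "b": 0.4}),
          ToyModel({"<unk>": 0.2, "a": 0.3, "b": 0.5})]
print(log_linear_prob("a", (), models, weights=[0.6, 0.4], vocab=["<unk>", "a", "b"]))
```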
Our proofs assume the component models pi are backoff language models (Katz, 1987) that memorize probability for seen n–grams and charge a 876 backoff penalty bi for unseen n–grams. pi(wn | wn−1 1 ) = ( pi(wn | wn−1 1 ) if wn 1 is seen pi(wn | wn−1 2 )bi(wn−1 1 ) o.w. While linearly or log-linearly interpolated models can be queried online by querying the component models (Stolcke, 2002; Federico et al., 2008), doing so costs RAM to store duplicated n–grams and CPU time to perform lookups. Log-linear interpolation is particularly slow due to normalizing over the entire vocabulary. Instead, it is preferable to combine the models offline into a single backoff model containing the union of n–grams. Doing so is impossible for linear interpolation (§3.2); SRILM (Stolcke, 2002) and MITLM (Hsu and Glass, 2008) implement an approximation. In contrast, we prove that offline log-linear interpolation requires no such approximation. 2 Related Work Instead of building separate models then weighting, Zhang and Chiang (2014) show how to train Kneser-Ney models (Kneser and Ney, 1995) on weighted data. Their work relied on prescriptive weights from domain adaptation techniques rather than tuning weights, as we do here. Our exact normalization approach relies on the backoff structure of component models. Several approximations support general models: ignoring normalization (Chen et al., 1998), noisecontrastive estimation (Vaswani et al., 2013), and self-normalization (Andreas and Klein, 2015). In future work, we plan to exploit the structure of other features in high-quality unnormalized loglinear language models (Sethy et al., 2014). Ignoring normalization is particularly common in speech recognition and machine translation. This is one of our baselines. Unnormalized models can also be compiled into a single model by multiplying the weighted probabilities and backoffs.1 Many use unnormalized models because weights can be jointly tuned along with other feature weights. However, Haddow (2013) showed that linear interpolation weights can be jointly tuned by pairwise ranked optimization (Hopkins and May, 2011). In theory, normalized log-linear interpolation weights can be jointly tuned in the same way. 1Missing probabilities are found from the backoff algorithm and missing backoffs are implicitly one. Dynamic interpolation weights (Weintraub et al., 1996) give more weight to models familiar with a given query. Typically the weights are a function of the contexts that appear in the combined language model, which is compatible with our approach. However, normalizing factors would need to be calculated in each context. 3 Linear Interpolation To motivate log-linear interpolation, we examine two issues with linear interpolation: normalization when component models have different vocabularies and offline interpolation. 3.1 Vocabulary Differences Language models are normalized with respect to their vocabulary, including the unknown word. X x∈vocab(p1) p1(x) = 1 If two models have different vocabularies, then the combined vocabulary is larger and the sum is taken over more words. Component models assign their unknown word probability to these new words, leading to an interpolated model that sums to more than one. An example is shown in Table 1. p1 p2 pL Zero <unk> 0.4 0.2 0.3 0.3 A 0.6 0.4 0.3 B 0.8 0.6 0.4 Sum 1 1 1.3 1 Table 1: Linearly interpolating two models p1 and p2 with equal weight yields an unnormalized model pL. If gaps are filled with zeros instead, the model is normalized. 
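The Table 1 example can be checked numerically. In the small sketch below, out-of-vocabulary queries fall back to each model's unknown-word probability, and the equal-weight interpolation sums to 1.3 rather than 1.

```python
p1 = {"<unk>": 0.4, "A": 0.6}    # B is new to p1: p1(B) = p1(<unk>)
p2 = {"<unk>": 0.2, "B": 0.8}    # A is new to p2: p2(A) = p2(<unk>)

def linear(word, lam=0.5):
    return lam * p1.get(word, p1["<unk>"]) + (1 - lam) * p2.get(word, p2["<unk>"])

# Sums to 1.3 (up to float rounding), not 1: the interpolation is unnormalized.
print(sum(linear(w) for w in ["<unk>", "A", "B"]))
```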
To work around this problem, SRILM (Stolcke, 2002) uses zero probability instead of the unknown word probability for new words. This produces a model that sums to one, but differs from what users might expect. IRSTLM (Federico et al., 2008) asks the user to specify a common large vocabulary size. The unknown word probability is downweighted so that all models sum to one over the large vocabulary. A component model can also be renormalized with respect to a larger vocabulary. For unigrams, the extra mass is the number of new words times the unknown word probability. For longer contexts, if we assume the typical case where the 877 unknown word appears only as a unigram, then queries for new words will back off to unigrams. The total mass in context wn−1 1 is 1 + |new|p(<unk>) n−1 Y i=1 b(wn−1 i ) where new is the set of new words. This is efficient to compute online or offline. While there are tools to renormalize models, we are not aware of a tool that does this for linear interpolation. Log-linear interpolation is normalized by construction. Nonetheless, in our experiments we extend IRSTLM’s approach by training models with a common vocabulary size, rather than retrofitting it at query time. 3.2 Offline Linear Interpolation Given an interpolated model, offline interpolation seeks a combined model meeting three criteria: (i) encoding the same probability distribution, (ii) being a backoff model, and (iii) containing the union of n–grams from component models. Theorem 1. The three offline criteria cannot be satisfied for general linearly interpolated backoff models. Proof. By counterexample. Consider the models given in Table 2 interpolated with equal weight. p1 p2 pL p(A) 0.4 0.2 0.3 p(B) 0.3 0.3 0.3 p(C) 0.3 0.5 0.4 p(C | A) 0.4 0.8 0.6 b(A) 6 7 ≈0.857 0.4 2 3 ≈0.667 Table 2: Counterexample models. The probabilities shown for pL result from encoding the same distribution. Taking the union of n– grams implies that pL only has entries for A, B, C, and A C. Since the models have the same vocabulary, they are all normalized to one. p(A | A) + p(B | A) + p(C | A) = 1 Since all models have backoff structure, p(A)b(A) + p(B)b(A) + p(C | A) = 1 which when solved for backoff b(A) gives the values shown in Table 2. We then query pL(B | A) online and offline. Online interpolation yields pL(B | A) = 1 2p1(B | A) + 1 2p2(B | A) = 1 2p1(B)b1(A) + 1 2p2(B)b2(A) = 33 175 Offline interpolation yields pL(B | A) = pL(B)bL(A) = 0.2 ̸= 33 175 ≈0.189 The same problem happens with real language models. To understand why, we attempt to solve for the backoff bL(wn−1 1 ). Supposing wn 1 is not in either model, we query pL(wn | wn−1 1 ) offline pL(wn|wn−1 1 ) =pL(wn|wn−1 2 )bL(wn−1 1 ) =(λ1p1(wn|wn−1 2 ) + λ2p2(wn|wn−1 2 ))bL(wn−1 1 ) while online interpolation yields pL(wn|wn−1 1 ) =λ1p1(wn|wn−1 1 ) + λ2p2(wn|wn−1 1 ) =λ1p1(wn|wn−1 2 )b1(wn−1 1 ) + λ1p2(wn|wn−1 2 )b2(wn−1 1 ) Solving for bL(wn−1 1 ) we obtain λ1p1(wn|wn−1 2 )b1(wn−1 1 ) + λ2p2(wn|wn−1 2 )b2(wn−1 1 ) λ1p1(wn|wn−1 2 ) + λ2p2(wn|wn−1 2 ) which is a weighted average of the backoff weights b1(wn−1 1 ) and b2(wn−1 1 ). The weights depend on wn, so bL is no longer a function of wn−1 1 . In the SRILM approximation (Stolcke, 2002), probabilities for n–grams that exist in the model are computed exactly. The backoff weights are chosen to produce a model that sums to one. However, newer versions of SRILM (Stolcke et al., 2011) interpolate by ingesting one component model at a time. 
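The counterexample of Theorem 1 is also easy to verify numerically. The sketch below derives each backoff weight from the normalization constraint, as in the proof, and confirms that the online and offline queries for p_L(B | A) disagree (33/175 versus 1/5).

```python
from fractions import Fraction as F

p1 = {"A": F(4, 10), "B": F(3, 10), "C": F(3, 10), ("A", "C"): F(4, 10)}
p2 = {"A": F(2, 10), "B": F(3, 10), "C": F(5, 10), ("A", "C"): F(8, 10)}

def backoff_A(p):
    # From normalization in context A: p(A)b(A) + p(B)b(A) + p(C|A) = 1.
    return (1 - p[("A", "C")]) / (p["A"] + p["B"])

pL = {w: (p1[w] + p2[w]) / 2 for w in ["A", "B", "C"]}
pL[("A", "C")] = (p1[("A", "C")] + p2[("A", "C")]) / 2

online = (p1["B"] * backoff_A(p1) + p2["B"] * backoff_A(p2)) / 2   # 33/175
offline = pL["B"] * backoff_A(pL)                                  # 1/5
print(online, offline, online == offline)                          # they differ, as claimed
```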
For example, the first two models are approximately interpolated before adding a third model. An n–gram appearing only in the third model will have an approximate probability. Therefore, the output depends on the order in which users specify models. Moreover, weights were optimized for correct linear interpolation, not the approximation. Stolcke (2002) find that the approximation actually decreases perplexity, which we also see in the experiments (§6). However, approximation only happens when the model backs off, which is less likely to happen in fluent sentences used for perplexity scoring. 878 4 Offline Log-Linear Interpolation Log-linearly interpolated backoff models pi can be collapsed into a single offline model pLL. The combined model takes the union of n–grams in component models.2 For those n–grams, it memorizes correct probability pLL. pLL(wn | wn−1 1 ) = Q i pi(wn | wn−1 1 )λi Z(wn−1 1 ) (1) When wn 1 does not appear, the backoff bLL(wn−1 1 ) modifies pLL(wn | wn−1 2 ) to make an appropriately normalized probability. To do so, it cancels out the shorter query’s normalization term Z(wn−1 2 ) then applies the correct term Z(wn−1 1 ). It also applies the component backoff terms. bLL(wn−1 1 ) = Z(wn−1 2 ) Z(wn−1 1 ) Y i bi(wn−1 1 )λi (2) Almost by construction, the model satisfies two of our criteria (§3.2): being a backoff model and containing the union of n–grams. However, backoff models require that the backoff weight of an unseen n–gram be implicitly 1. Lemma 1. If wn−1 1 is unseen in the combined model, then the backoff weight bLL(wn−1 1 ) = 1. Proof. Because we have taken the union of entries, wn−1 1 is unseen in component models. These components are backoff models, so implicitly bi(wn−1 1 ) = 1 ∀i. Focusing on the normalization term Z(wn−1 1 ), Z(wn−1 1 ) = X x Y i pi(x | wn−1 1 )λi = X x Y i pi(x | wn−1 2 )λibi(wn−1 1 )λi = X x Y i pi(x | wn−1 2 )λi = Z(wn−1 2 ) All of the models back off because wn−1 1 x is unseen, being a superstring of wn−1 1 . Relevant backoff weights bi(wn−1 1 ) = 1 as noted earlier. Recalling the definition of bLL(wn−1 1 ), Z(wn−1 2 ) Z(wn−1 1 ) Y i bi(wn−1 1 )λi = Z(wn−1 2 ) Z(wn−1 1 ) = 1 2We further assume that every substring of a seen n–gram is also seen. This follows from estimating on text, except in the case of adjusted count pruning by SRILM. In such cases, we add the missing entries to component models, with no additional memory cost in trie data structures. We now have a backoff model containing the union of n–grams. It remains to show that the offline model produces correct probabilities. Theorem 2. The proposed offline model agrees with online log-linear interpolation. Proof. By induction on the number of words backed off in offline interpolation. To disambiguate, we will use pon to refer to online interpolation and poff to refer to offline interpolation. Base case: the queried n–gram is in the offline model and we have memorized the online probability by construction. Inductive case: Let poff(wn | wn−1 1 ) be a query that backs off. In online interpolation, pon(wn | wn−1 1 ) = Q i pi(wn | wn−1 1 )λi Z(wn−1 1 ) Because wn 1 is unseen in the offline model and we took the union, it is unseen in every model pi. 
= Q i pi(wn | wn−1 2 )λibi(wn−1 1 )λi Z(wn−1 1 ) = Q i pi(wn | wn−1 2 )λi Q i bi(wn−1 1 )λi Z(wn−1 1 ) Recognizing the unnormalized probability Z(wn−1 2 )pon(wn | wn−1 2 ), = Z(wn−1 2 )pon(wn | wn−1 2 ) Q i bi(wn−1 1 )λi Z(wn−1 1 ) = pon(wn | wn−1 2 )Z(wn−1 2 ) Z(wn−1 1 ) Y i bi(wn−1 1 )λi = pon(wn | wn−1 2 )boff(wn−1 1 ) The last equality follows from the definition of boff and Lemma 1, which extended the domain of boff to any wn−1 1 . By the inductive hypothesis, pon(wn | wn−1 2 ) = poff(wn | wn−1 2 ) because it backs off one less time. = poff(wn | wn−1 2 )boff(wn−1 1 ) = poff(wn | wn−1 1 ) The offline model poff(wn | wn−1 1 ) backs off because that is the case we are considering. Combining our chain of equalities, pon(wn | wn−1 1 ) = poff(wn | wn−1 1 ) By induction, the claim holds for all wn 1 . 879 4.1 Normalizing Efficiently In order to build the offline model, the normalization factor Z needs to be computed in every seen context. To do so, we extend the tree-structure method of Whittaker and Klakow (2002), which they used to compute and cache normalization factors on the fly. It exploits the sparsity of language models: when summing over the vocabulary, most queries will back off. Formally, we define s(wn 1 ) to be the set of words x where pi(x | wn 1 ) does not back off in some model. s(wn 1 ) = {x : wn 1 x is seen in any model} To exploit this, we use the normalizing factor Z(wn 2 ) from a lower order and patch it up by summing over s(wn 1 ). Theorem 3. The normalization factors Z obey a recurrence relationship: Z(wn 1 ) = X x∈s(wn 1 ) Y i pi(x | wn 1 )λi+  Z(wn 2 ) − X x∈s(wn 1 ) Y i pi(x | wn 2 )λi  Y i bi(wn 1 )λi Proof. The first term handles seen n–grams while the second term handles unseen n–grams. The definition of Z Z(wn 1 ) = X x∈vocab Y i pi(x | wn 1 )λi can be partitioned by cases. X x∈s(wn 1 ) Y i pi(x | wn 1 )λi+ X x̸∈s(wn 1 ) Y i pi(x | wn 1 )λi The first term agrees with the claim, so we focus on the case where x ̸∈s(wn 1 ). By definition of s, all models back off. X x̸∈s(wn 1 ) Y i pi(x | wn 1 )λi = X x̸∈s(wn 1 ) Y i pi(x | wn 2 )λibi(wn 1 )λi =  X x̸∈s(wn 1 ) Y i pi(x | wn 2 )λi  Y i bi(wn 1 )λi =  Z(wn 2 ) − X x∈s(wn 1 ) Y i pi(x | wn 2 )λi  Y i bi(wn 1 )λi This is the second term of the claim. LM1 LM2 ... LMℓ Merge probabilities (§4.2.1) Apply Backoffs (§4.2.2) Normalize (§4.2.3) Output (§4.2.4) Context sort * wn 1 , m(wn 1 ), + Q i pi(wn|wn−1 mi(wn 1 ))λi), Q i pi(wn|wn−1 mi(wn 2 ))λi) In context order * wn 1 , Q i bi(wn−1 1 )λi, + Q i pi(wn | wn−1 1 )λi, Q i pi(wn | wn−1 2 )λi In suffix order bLL(wn 1 ) Suffix sort * wn 1 , pLL(wn|wn−1 1 ) + Figure 1: Multi-stage streaming pipeline for offline log-linear interpolation. Bold arrows indicate sorting is performed. The recurrence structure of the normalization factors suggests a computational strategy: compute Z(ϵ) by summing over the unigrams, Z(wn) by summing over bigrams wnx, Z(wn n−1) by summing over trigrams wn n−1x, and so on. 4.2 Streaming Computation Part of the point of offline interpolation is that there may not be enough RAM to fit all the component models. Moreover, with compression techniques that rely on immutable models (Whittaker and Raj, 2001; Talbot and Osborne, 2007), a mutable version of the combined model may not fit in RAM. Instead, we construct the offline model with disk-based streaming algorithms, using the framework we designed for language model estimation (Heafield et al., 2013). 
Our pipeline (Figure 1) has four conceptual steps: merge probabilities, apply backoffs, normalize, and output. Applying backoffs and normalization are performed in the same pass, so there are three total passes. 4.2.1 Merge Probabilities This step takes the union of n–grams and multiplies probabilities from component models. We 880 assume that the component models are sorted in suffix order (Figure 4), which is true of models produced by lmplz (Heafield et al., 2013) or stored in a reverse trie. Moreover, despite having different word indices, the models are consistently sorted using the string word, or a hash thereof. 3 2 1 A A A A A A B A A B Table 3: Merging probabilities processes n–grams in lexicographic order by suffix. Column headings indicate precedence. The algorithm processes n–grams in lexicographic (depth-first) order by suffix (Table 3). In this way, the algorithm processes pi(A) before it might be used as a backoff point for pi(A | B) in one of the models. It jointly streams through all models, so that p1(A | B) and p2(A | B) are available at the same time. Ideally, we would compute unnormalized probabilities. Y i pi(wn | wn−1 1 )λi However, these queries back off when models contain different n–grams. The appropriate backoff weights bi(wn−1 1 ) are not available in a streaming fashion. Instead, we proceed without charging backoffs Y i pi(wn | wn−1 mi(wn 1 ))λi where mi(wn 1 ) records what backoffs should be charged later. The normalization step (§4.2.3) also uses lowerorder probabilities Y i pi(wn | wn−1 2 )λi and needs to access them in a streaming fashion, so we also output Y i pi(wn | wn−1 mi(wn 2 ))λi Suffix 3 2 1 Z B A Z A B B B B Context 2 1 3 Z A B B B B Z B A Table 4: Sorting orders (Heafield et al., 2013). In suffix order, the last word is primary. In context order, the penultimate word is primary. Column headings indicate precedence. Each output tuple has the form * wn 1 , m(wn 1 ), Y i pi(wn|wn−1 mi(wn 1 ))λi, Y i pi(wn|wn−1 mi(wn 2 ))λi + where m(wn 1 ) is a vector of backoff requests, from which m(wn 2 ) can be computed. 4.2.2 Apply Backoffs This step fulfills the backoff requests from merging probabilities. The merged probabilities are sorted in context order (Table 4) so that n– grams wn 1 sharing the same context wn−1 1 are consecutive. Moreover, contexts wn−1 1 appear in suffix order. We use this property to stream through the component models again in their native suffix order, this time reading backoff weights bi(wn−1 1 ), bi(wn−1 2 ), . . . , bi(wn−1). Multiplying the appropriate backoff weights by Q i pi(wn|wn−1 mi(wn 1 ))λi yields unnormalized probability Y i pi(wn|wn−1 1 )λi The same applies to the lower order. Y i pi(wn|wn−1 2 )λi This step also merges backoffs from component models, with output still in context order. * wn 1 , Y i bi(wn−1 1 )λi, Y i pi(wn|wn−1 1 )λi Y i pi(wn|wn−1 2 )λi + The implementation is combined with normalization, so the tuple is only conceptual. 881 4.2.3 Normalize This step computes normalization factor Z for all contexts, which it applies to produce pLL and bLL. Recalling §4.1, Z(wn−1 1 ) is efficient to compute in a batch process by processing suffixes Z(ϵ), Z(wn), . . . Z(wn−1 2 ) first. In order to minimize memory consumption, we chose to evaluate the contexts in depth-first order by suffix, so that Z(A) is computed immediately before it is needed to compute Z(A A) and forgotten at Z(B). 
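The recurrence of Theorem 3 is compact enough to sketch directly. The toy backoff model below is a stand-in for the streaming, disk-based implementation: it stores seen n-grams and context backoffs in dictionaries and assumes every vocabulary word has a unigram entry. The final check confirms that the recurrence matches the brute-force sum over the vocabulary.

```python
class ToyBackoff:
    """Tiny in-memory stand-in for a backoff LM: seen n-grams and context backoffs in dicts.
    Assumes every vocabulary word has a unigram entry, so lookups terminate."""
    def __init__(self, probs, backoffs):
        self.probs, self.backoffs = probs, backoffs        # keys are tuples of words
    def seen(self, context):
        return {ng[-1] for ng in self.probs if ng[:-1] == tuple(context)}
    def backoff(self, context):
        return self.backoffs.get(tuple(context), 1.0)      # implicit backoff of 1 if unseen
    def prob(self, word, context):
        context = tuple(context)
        if context + (word,) in self.probs:
            return self.probs[context + (word,)]
        return self.backoff(context) * self.prob(word, context[1:])

def prod_prob(word, context, models, weights):
    out = 1.0
    for m, lam in zip(models, weights):
        out *= m.prob(word, context) ** lam
    return out

def normalizer(context, models, weights, z_shorter):
    """Theorem 3: Z(context) from Z(context[1:]), summing only over seen continuations."""
    seen = set().union(*(m.seen(context) for m in models))
    seen_here = sum(prod_prob(x, context, models, weights) for x in seen)
    seen_lower = sum(prod_prob(x, context[1:], models, weights) for x in seen)
    backoff = 1.0
    for m, lam in zip(models, weights):
        backoff *= m.backoff(context) ** lam
    return seen_here + (z_shorter - seen_lower) * backoff

m1 = ToyBackoff({("a",): 0.5, ("b",): 0.3, ("c",): 0.2, ("a", "b"): 0.6}, {("a",): 0.4 / 0.7})
m2 = ToyBackoff({("a",): 0.4, ("b",): 0.4, ("c",): 0.2, ("a", "c"): 0.5}, {("a",): 0.5 / 0.8})
models, weights, vocab = [m1, m2], [0.5, 0.5], ["a", "b", "c"]

z_empty = sum(prod_prob(x, (), models, weights) for x in vocab)    # Z(epsilon) over unigrams
z_a = normalizer(("a",), models, weights, z_empty)                 # Z("a") via the recurrence
brute = sum(prod_prob(x, ("a",), models, weights) for x in vocab)  # direct sum, for comparison
print(abs(z_a - brute) < 1e-12)                                    # True: the recurrence is exact
```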
Computing Z(wn−1 1 ) by applying Theorem 3 requires the sum X x∈s(wn−1 1 ) Y i pi(x | wn−1 1 )λi where s(wn−1 1 ) restricts to seen n–grams. For this, we stream through the output of the apply backoffs step in context order, which makes the various values of x consecutive. Theorem 3 also requires a sum over the lower-order unnormalized probabilities X x∈s(wn−1 1 ) Y i pi(x | wn−1 2 )λi We placed these terms in the input tuple for wn−1 1 x. Otherwise, it would be hard to access these values while streaming in context order. While we have shown how to compute Z(wn−1 1 ), we still need to normalize the probabilities. Unfortunately, Z(wn−1 1 ) is only known after streaming through all records of the form wn−1 1 x, which are the very same records to normalize. We therefore buffer up to the vocabulary size for each order in memory to allow rewinding. Processing context wn−1 1 thus yields normalized probabilities pLL(x | wn−1 1 ) for all seen wn−1 1 x. D wn 1 , pLL(x | wn−1 1 ) E These records are generated in context order, the same order as the input. The normalization step also computes backoffs. bLL(wn−1 1 ) = Z(wn−1 2 ) Z(wn−1 1 ) Y i bi(wn−1 1 )λi Normalization Z(wn−1 1 ) is computed by this step, numerator Z(wn−1 2 ) is available due to depth-first search, and the backoff terms Q i bi(wn−1 1 )λi are present in the input. The backoffs bLL are generated in suffix order, since each context produces a backoff value. These are written to a sidechannel stream as bare values without keys. 4.2.4 Output Language model toolkits store probability pLL(wn | wn−1 1 ) and backoff bLL(wn 1 ) together as values for the key wn 1 . To reunify them, we sort ⟨wn 1 , pLL(wn | wn−1 1 )⟩in suffix order and merge with the backoff sidechannel from normalization, which is already in suffix order. Suffix order is also preferable because toolkits can easily build a reverse trie data structure. 5 Tuning Weights are tuned to maximize the log probability of held-out data. This is a convex optimization problem (Klakow, 1998). Iterations are expensive due to the need to normalize over the vocabulary at least once. However, the number of weights is small, which makes the Hessian matrix cheap to store and invert. We therefore selected Newton’s method.3 The log probability of tuning data w is log Y n pLL(wn | wn−1 1 ) which expands according to the definition of pLL X n X i λi log pi(wn | wn−1 1 ) ! −log Z(wn−1 1 ) The gradient with respect to λi has a compact form X n log pi(wn | wn−1 1 ) + CH(pLL, pi | wn−1 1 ) where CH is cross entropy. However, computing the cross entropy directly would entail a sum over the vocabulary for every word in the tuning data. Instead, we apply Theorem 3 to express Z(wn−1 1 ) in terms of Z(wn−1 2 ) before taking the derivative. This allows us to perform the same depth-first computation as before (§4.2.3), only this time ∂ ∂λi Z(wn−1 1 ) is computed in terms of ∂ ∂λi Z(wn−1 2 ). The same argument applies when taking the Hessian with respect to λi and λj. Rather than compute it directly in the form X n − X x pLL(x|wn−1 1 ) log pi(x|wn−1 1 ) log pj(x|wn−1 1 ) + CH(pLL, pi | wn−1 1 )CH(pLL, pj | wn−1 1 ) we apply Theorem 3 to compute the Hessian for wn 1 in terms of the Hessian for wn 2 . 3We also considered minibatches, though grouping tuning data to reduce normalization cost would introduce bias. 882 6 Experiments We perform experiments for perplexity, query speed, memory consumption, and effectiveness in a machine translation system. 
Individual language models were trained on English corpora from the WMT 2016 news translation shared task (Bojar et al., 2016). This includes the seven newswires (afp, apw, cna, ltw, nyt, wpb, xin) from English Gigaword Fifth Edition (Parker et al., 2011); the 2007–2015 news crawls;4 News discussion; News commmentary v11; English from Europarl v8 (Koehn, 2005); the English side of the French-English parallel corpus (Bojar et al., 2013); and the English side of SETIMES2 (Tiedemann, 2009). We additionally built one language model trained on the concatenation of all of the above corpora. All corpora were preprocessed using the standard Moses (Koehn et al., 2007) scripts to perform normalization, tokenization, and truecasing. To prevent SRILM from running out of RAM, we excluded the large monolingual CommonCrawl data, but included English from the parallel CommonCrawl data. All language models are 5-gram backoff language models trained with modified Kneser-Ney smoothing (Chen and Goodman, 1998) using lmplz (Heafield et al., 2013). Also to prevent SRILM from running out of RAM, we pruned singleton trigrams and above. For linear interpolation, we tuned weights using IRSTLM. To work around SRILM’s limitation of ten models, we interpolated the first ten then carried the combined model and added nine more component models, repeating this last step as necessary. Weights were normalized within batches to achieve the correct final weighting. This simply extends the way SRILM internally carries a combined model and adds one model at a time. 6.1 Perplexity experiments We experiment with two domains: TED talks, which is out of domain, and news, which is indomain for some corpora. For TED, we tuned on the IWSLT 2010 English dev set and test on the 2010 test set. For news, we tuned on the English side of the WMT 2015 Russian–English evaluation set and test on the WMT 2014 Russian– English evaluation set. To measure generalization, we also evaluated news on models tuned for TED and vice-versa. Results are shown in Table 5. 4For News Crawl 2014, we used version 2. Component Models Component TED test News test Gigaword afp 163.48 221.57 Gigaword apw 140.65 206.85 Gigaword cna 299.93 448.56 Gigaword ltw 106.28 243.35 Gigaword nyt 97.21 211.75 Gigaword wpb 151.81 341.48 Gigaword xin 204.60 246.32 News 07 127.66 243.53 News 08 112.48 202.86 News 09 111.43 197.32 News 10 114.40 209.31 News 11 107.69 187.65 News 12 105.74 180.28 News 13 104.09 155.89 News 14 v2 101.85 139.94 News 15 101.13 141.13 News discussion 99.88 249.63 News commentary v11 236.23 495.27 Europarl v8 268.41 574.74 CommonCrawl fr-en.en 149.10 343.20 SETIMES2 ro-en.en 331.37 521.19 All concatenated 80.69 96.15 TED weights Interpolation TED test News test Offline linear 75.91 100.43 Online linear 76.93 152.37 Log-linear 72.58 112.31 News weights Interpolation TED test News test Offline linear 83.34 107.69 Online linear 83.94 110.95 Log-linear 89.62 124.63 Table 5: Test set perplexities. In the middle table, weights are optimized for TED and include a model trained on all concatenated text. In the bottom table, weights are optimized for news and exclude the model trained on all concatenated text. 883 LM Tuning Compiling Querying All concatenated N/A N/A N/A N/A 0.186µs 46.7G Offline linear 0.876m 60.2G 641m 123G 0.186µs 46.8G Online linear 0.876m 60.2G N/A N/A 5.67µs 89.1G Log-linear 600m 63.9G 89.8m 63.9G 0.186µs 46.8G Table 6: Speed and memory consumption of LM combination methods. Interpolated models include the concatenated model. 
Tuning and compiling times are in minutes, memory consumption in gigabytes, and query time in microseconds per query (on 1G of held-out Common Crawl monolingual data). Log-linear interpolation performs better on TED (72.58 perplexity versus 75.91 for offline linear interpolation). However, it performs worse on news. In future work, we plan to investigate whether log-linear wins when all corpora are outof-domain since it favors agreement by all models. Table 6 compares the speed and memory performance of the competing methods. While the log-linear tuning is much slower, its compilation is faster compared to the offline linear model’s long run time. Since the model formats are the same for the concatenation and log-linear, they share the fastest query speeds. Query speed was measured using KenLM’s (Heafield, 2011) faster probing data structure.5 6.2 MT experiments We trained a statistical phrase-based machine translation system for Romanian-English on the Romanian-English parallel corpora released as part of the 2016 WMT news translation shared task. We trained three variants of this MT system. The first used a single language model trained on the concatenation of the 21 individual LM training corpora. The second used 22 language models, with each LM presented to Moses as a separate feature. The third used a single language model which is an interpolation of all 22 models. This variant was run with offline linear, online linear, and log-linear interpolation. All MT system variants were optimized using IWSLT 2011 Romanian-English TED test as the development set, and were evaluated using the IWSLT 2012 Romanian-English TED test set. As shown in Table 7, no significant difference in MT quality as measured by BLEU was observed; the best BLEU score came from separate features at 18.40 while log-linear scored 18.15. We leave 5KenLM does not natively implement online linear interpolation, so we wrote a custom wrapper, which is faster than querying IRSTLM. LM BLEU BLEU-c 22 separate LMs 18.40 17.91 All concatenated 18.02 17.55 Offline linear 18.00 17.53 Online linear 18.27 17.82 Log-linear 18.15 17.70 Table 7: Machine translation performance comparison in an end-to-end system. jointly tuned normalized log-linear interpolation to future work. 7 Conclusion Normalized log-linear interpolation is now a tractable alternative to linear interpolation for backoff language models. Contrary to Hsu (2007), we proved that these models can be exactly collapsed into a single backoff language model. This solves the query speed problem. Empirically, compiling the log-linear model is faster than SRILM can collapse its approximate offline linear model. In future work, we plan to improve performace of feature weight tuning and investigate more general features. Acknowledgments Thanks to Jo˜ao Sedoc, Grant Erdmann, Jeremy Gwinnup, Marcin Junczys-Dowmunt, Chris Dyer, Jon Clark, and MT Marathon attendees for discussions. Partial funding was provided by the Amazon Academic Research Awards program. This material is based upon work supported by the NSF GRFP under Grant Number DGE-1144245. References Jacob Andreas and Dan Klein. 2015. When and why are log-linear models self-normalizing? In NAACL 2015. 884 Ondˇrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 workshop on statistical machine translation. 
In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1–44, Sofia, Bulgaria, August. Association for Computational Linguistics. Ondˇrej Bojar, Christian Buck, Rajen Chatterjee, Christian Federmann, Liane Guillou, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Aur´elie N´ev´eol, Mariana Neves, Pavel Pecina, Martin Popel, Philipp Koehn, Christof Monz, Matteo Negri, Matt Post, Lucia Specia, Karin Verspoor, J¨org Tiedemann, and Marco Turchi. 2016. Findings of the 2016 Conference on Machine Translation. In Proceedings of the First Conference on Machine Translation (WMT’16), Berlin, Germany, August. Stanley Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University, August. Stanley F. Chen, Kristie Seymore, and Ronald Rosenfeld. 1998. Topic adaptation for language modeling using unnormalized exponential models. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on, volume 2, pages 681–684. IEEE. Marcello Federico, Nicola Bertoldi, and Mauro Cettolo. 2008. IRSTLM: an open source toolkit for handling large scale language models. In Proceedings of Interspeech, Brisbane, Australia. Alexander Gutkin. 2000. Log-linear interpolation of language models. Master’s thesis, University of Cambridge, November. Barry Haddow. 2013. Applying pairwise ranked optimisation to improve the interpolation of translation models. In Proceedings of NAACL. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, August. Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, Edinburgh, UK, July. Association for Computational Linguistics. Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352—-1362, Edinburgh, Scotland, July. Bo-June Hsu and James Glass. 2008. Iterative language model estimation: Efficient data structure & algorithms. In Proceedings of Interspeech, Brisbane, Australia. Bo-June Hsu. 2007. Generalized linear interpolation of language models. In Automatic Speech Recognition & Understanding, 2007. ASRU. IEEE Workshop on, pages 136–140. IEEE. Frederick Jelinek and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, pages 381–397, May. Slava Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-35(3):400– 401, March. Dietrich Klakow. 1998. Log-linear interpolation of language models. In Proceedings of 5th International Conference on Spoken Language Processing, pages 1695–1699, Sydney, November. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 181–184. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. 
Moses: Open source toolkit for statistical machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), Prague, Czech Republic, June. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of MT Summit. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword fifth edition, June. LDC2011T07. Abhinav Sethy, Stanley Chen, Bhuvana Ramabhadran, and Paul Vozila. 2014. Static interpolation of exponential n–gram models using features of features. In 2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP). Andreas Stolcke, Jing Zheng, Wen Wang, and Victor Abrash. 2011. SRILM at sixteen: Update and outlook. In Proc. 2011 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), Waikoloa, Hawaii, USA. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of the Seventh International Conference on Spoken Language Processing, pages 901–904. David Talbot and Miles Osborne. 2007. Randomised language modelling for statistical machine translation. In Proceedings of ACL, pages 512–519, Prague, Czech Republic. 885 J¨org Tiedemann. 2009. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with large-scale neural language models improves translation. In Proceedings of EMNLP. Mitch Weintraub, Yaman Aksu, Satya Dharanipragada, Sanjeev Khudanpur, Hermann Ney, John Prange, Andreas Stolcke, Fred Jelinek, and Liz Shriberg. 1996. LM95 project report: Fast training and portability. Research Note 1, Center for Language and Speech Processing, Johns Hopkins University, February. Edward D. W. Whittaker and Dietrich Klakow. 2002. Efficient construction of long-range language models using log-linear interpolation. In 7th International Conference on Spoken Language Processing, pages 905–908. Edward Whittaker and Bhiksha Raj. 2001. Quantization-based language model compression. In Proceedings of Eurospeech, pages 33–36, Aalborg, Denmark, December. Hui Zhang and David Chiang. 2014. Kneser-Ney smoothing on expected counts. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 765–774. ACL. 886
2016
83
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 887–896, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics How Well Do Computers Solve Math Word Problems? Large-Scale Dataset Construction and Evaluation Danqing Huang1∗, Shuming Shi2, Chin-Yew Lin2, Jian Yin1 and Wei-Ying Ma2 1 Sun Yat-sen University {huangdq2@mail2,issjyin@mail}.sysu.edu.cn 2 Microsoft Research {shumings,cyl,wyma}@microsoft.com Abstract Recently a few systems for automatically solving math word problems have reported promising results. However, the datasets used for evaluation have limitations in both scale and diversity. In this paper, we build a large-scale dataset which is more than 9 times the size of previous ones, and contains many more problem types. Problems in the dataset are semiautomatically obtained from community question-answering (CQA) web pages. A ranking SVM model is trained to automatically extract problem answers from the answer text provided by CQA users, which significantly reduces human annotation cost. Experiments conducted on the new dataset lead to interesting and surprising results. 1 Introduction Designing computer systems for automatically solving math word problems is a challenging research topic that dates back to the 1960s (Bobrow, 1964a; Briars and Larkin, 1984; Fletcher, 1985). As early proposals seldom report empirical evaluation results, it is unclear how well they perform. Recently, promising results have been reported on both statistical learning approaches (Kushman et al., 2014; Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Zhou et al., 2015; Roy and Roth, 2015) and semantic parsing methods (Shi et al., 2015). However, we observe two limitations on the datasets used by these previous works. First, the datasets are small. The most frequently used dataset (referred to as Alg514 hereafter) only contains 514 algebra problems. The Dolphin1878 ∗Work done while this author was an intern at Microsoft Research. dataset (Shi et al., 2015), the largest collection among them, contains 1878 problems. Second, the diversity of problems in the datasets is low. The Alg514 collection contains linear algebra problems of 28 types (determined by 28 unique equation systems), with each problem type corresponding to at least 6 problems. Although the Dolphin1878 collection has over 1,000 problem types, only number word problems (i.e., math word problems about the operations and relationship of numbers) are contained in the collection. Due to the above two limitations, observations and conclusions based on existing datasets may not be representative. Therefore it is hard to give a convincing answer to the following question: How well do state-of-the-art computer algorithms perform in solving math word problems? To answer this question, we need to re-evaluate state-of-the-art approaches on a larger and more diverse data set. It is not hard to collect a large set of problems from the web. The real challenge comes from attaching annotations to the problems. Important annotation types include equation systems (required by most statistical learning methods for model training) and gold answers (for testing algorithm performance). Manually adding equation systems and gold answers is extremely time-consuming1. In this paper, we build a large-scale and diverse dataset called Dolphin18K 2, which contains over 18,000 annotated math word problems. 
It is constructed by semi-automatically extracting problems, equation systems and answers from community question-answering (CQA) web pages. The source data we leverage are the (question, answer text) pairs in the math category of Yahoo! An1According to our experience, the speed is about 10-15 problems per hour for a person with good math skills. 2Available from http://research.microsoft.com/enus/projects/dolphin/. 887 swers3. Please note that the answer text provided by CQA users cannot be used directly in evaluation as gold answers, because answer numbers and other numbers are often mixed together in answer text (refer to Figure 1 of Section 3). We train a ranking SVM model to identify (structured) problem answers from unstructured answer text. We then conduct experiments to test the performance of some recent math problem solving systems on the dataset. We make the following main observations, 1. All systems evaluated on the Dolphin18K dataset perform much worse than on their original small and less diverse datasets. 2. On the large dataset, a simple similaritybased method performs as well as more sophisticated statistical learning approaches. 3. System performance improves sub-linearly as more training data is used. This suggests that we need to develop algorithms which can utilize data more effectively. Our experiments indicate that the problem of automatic math word problem solving is still far from being solved. Good results obtained on small datasets may not be good indicators of high performance on larger and diverse datasets. For current methods, simply adding more training data is not an effective way to improve performance. New methodologies are required for this topic. 2 Related Work 2.1 Math Word Problem Solving Previous work on automatic math word problem solving falls into two categories: symbolic approaches and statistical learning methods. In symbolic approaches (Bobrow, 1964a; Bobrow, 1964b; Charniak, 1968; Charniak, 1969; Bakman, 2007; Liguda and Pfeiffer, 2012; Shi et al., 2015), math problem sentences are transformed to certain structures (usually trees) by pattern matching, verb categorization, or semantic parsing. Math equations are then derived from the structured representation. Addition/subtraction problems are studied most in early research (Briars and Larkin, 1984; Fletcher, 1985; Dellarosa, 1986; Bakman, 2007; Yuhui et al., 2010). Please refer to Mukherjee and Garain (2008) for a review of symbolic approaches before 2008. 3https://answers.yahoo.com/ Statistical machine learning methods have been proposed to solve math word problems since 2014. Hosseini et al. (2014) solve single step or multistep homogeneous addition and subtraction problems by learning verb categories from the training data. Kushman et al. (2014) and Zhou et al. (2015) solve a wide range of algebra word problems, given that systems of linear equations are attached to problems in the training set. Seo et al. (2015) focuses on SAT geometry questions with text and diagram provided. Koncel-Kedziorski et al. (2015) and Roy and Roth (2015) target math problems that can be solved by one single linear equation. No empirical evaluation results are reported in most early publications on this topic. Although promising empirical results are reported in recent work, the datasets employed in their evaluation are small and lack diversity. For example, the Alg514 dataset used in Kushman et al. (2014) and Zhou et al. (2015) only contains 514 problems of 28 types. 
Please refer to Section 3.4 for more details about the datasets. Recently, a framework was presented in Koncel-Kedziorsk et al. (2016) for building an online repository of math word problems. The framework is initialized by including previous public available datasets. The largest dataset among them contains 1,155 problems. 2.2 Answer Extraction in CQA Our work on automatic answer and equation extraction is related to the recent CQA extraction work (Agichtein et al., 2008; Cong et al., 2008; Ding et al., 2008). Most of them aim to discover high-quality (question, answer text) pairs from CQA posts. We are different because we extract structured data (i.e., numbers and equation systems) inside the pieces of answer text. 3 Dataset Construction Our goal is to construct a large and diverse problem collection of elementary mathematics (i.e., math topics frequently taught at the primary or secondary school levels). We build our dataset by automatically extracting problems and their annotations from the mathematics category of the Yahoo! Answers web site. A math problem post on Yahoo! Answers consists of the raw problem text and one or multiple pieces of answer text provided by its answerers (refer to Figure 1). 888 Please note that posts cannot be used directly as our dataset entries. For example, for training statistical models, we have to extract equation systems from the unstructured text of user answers. We also need to extract numbers (56,000 and 21,000 in Figure 1) from the raw answer text as gold answers. We perform the following actions to the posts, • Removing the posts that do not contain a math problem of our scope (Section 3.1) • Cleaning problem text (Section 3.1) • Extracting gold answers (Section 3.2) • Extracting equation systems (Section 3.3) In Section 3.4, we report some statistics of our dataset and compare them with previous ones. 3.1 Preprocessing We crawl over one million posts from the mathematics categories of Yahoo! Answers. They are part of the posts submitted and answered by users since year 2008. By examining some examples, we soon find that many of them do not contain math problems of our scope. We discard or ignore the posts with the following types: 1. Containing a general math-related question but not a typical math problem. For example, “Can anyone teach me how to set up two equations for one problem, and then how to solve it after?”. 2. College-level math 3. Containing multiple math problems in a single post. They are discarded for simplifying our answer and equation system extraction process. As the size of a set of one million problems is large for human annotation and many of them belong to the above three types, we need a way to automatically filter out undesired problems. We manually annotate 6,000 posts with the speed of about 150 posts per hour per person. Then a logistic regression classifier of posts is trained with a precision of 80% and a recall of 70%. The post collection after classification is reduced to 120,000 posts. Then we randomly sample 46,000 posts from the reduced post collection to perform two actions manually: post classification and problem Question part: Son’s 6th grade math? The number of cans produced in one day by two companies A and B were in ratio 8:3 and their difference was 35,000. How many cans did each company produce that day? Answer part: Answer 1: Let can produced by 1 company be 8x and the other 3x. so 8x - 3x = 35000. 5x = 35000. x = 7000. 
So the first company produced 8 x 7000 = 56,000 cans, and the other produced 3 x 7000 = 21,000 cans. Answer 2: From the ratio: 3A=8B or A=(8/3)B. From the difference: A-B=35000. By substituting for A, we get (8/3)BB=35000 and further to B = 21000. From the difference: A = 21000+35000=56000. Answer 3: It’s 56000 and 21000. Answer 4: what the hell thats not 6th grade math!!! Figure 1: An example post from Yahoo! answers text cleaning. Please note that, since the precision of the automatic classifier is only 80%, we rely on manual classification to remove the remaining 20% undesired posts. Problem text cleaning is for removing sentences like “please help” and “Son’s 6th grade math” (refer to Figure 1). The problem text after cleaning is just like that appearing in a formal math test in an elementary or secondary school. Eight annotators participated in the manual post classification and problem text cleaning, at an average speed of about 80 posts per hour per person. 3.2 Automatic Answer Extraction Compared to post classification and problem text cleaning, it is much more time consuming to manually assign gold answers and equation systems to a problem (10-15 problems per hour per person vs. 80 posts per hour per person). In addition, the latter has higher requirements of the math skills of annotators. Since manually annotating all problems exceeds our budget, we choose to train a high precision model to automatically extract numbers as gold answers from the answer part of a post. In our dataset, the gold answer to a problem is one or a set of numbers acting as the solution to the problem. We define answer dimension as the 889 count of numbers required in the gold answer. For example, the gold answer to the problem in Figure 1 is {56000, 21000}, with dimension 2. Extracting gold answers from the answer part of a post is nontrivial. We tried an intuitive approach called last-number-majority-voting, where the last number in each answer of the post is chosen as a candidate and then majority voting is performed among all the candidates. We got a low accuracy of 47% on our annotated data. Thus, we turn to a machine learning model for better utilizing more features in the posts. Notations: Let χ denote the set of training problems. For each problem xi in χ, Nij = {n1 ij, n2 ij, . . . , nm ij } denote the set of all unique numbers given the jth answer, where m represents the size of Nij. For each Nij, we generate possible subsets of numbers as candidate answers to the problems. Please pay attention that the gold answer to a problem may contain multiple numbers (in the case that the answer dimension is larger than 1). We use Yi to denote all the candidate answers in problem xi. Model: We define the conditional probability of yik ∈Yi given xi: p(yik|xi; ν) = exp(ν · f(xi, yik)) P y′ ik∈Yi exp(ν · f(xi, y′ ik)) where ν is a parameter vector of the model and f(xi, yik) is the feature vector. We apply the Ranking SVM (Herbrich et al., 2000) to maximize the margin between the correct instances and the negative ones. Constructing the SVM model is equivalent to solving the following Quadratic Optimization problem: min ν M(ν) = 1 2∥ν∥2 + C X i ξi s.t. ξi ≥0, ν · ⟨f(xi, yik)+ −f(xi, yil)−⟩≥1 −ξi where subscript “+” indicates the correct instance and “-” indicates the false ones. Features: Features are extracted from the answer part of each post for model training. We design features based on the following observations. In Yahoo! 
answers, users tend to write down correct answers at the beginning of the answer text, or at the end after providing the solving procedure. Surrounding words also give hints for finding correct solutions. For example, numbers that are close to the word “answer” are more likely to be in the gold answer. Given a post, numbers appearing in the answer text of more users are more likely to be the correct solution. Some words in the question sentence help determine answer dimension. For example, “How far does Tom run?” requires a one-dimension answer while “How much do they each earn?” indicates multiple dimensions. Main features are listed in table 1. Table 1: Features for automatic answer extraction Local context features Relative position in the procedure On right side of the symbol “=”? On left side of the symbol “=”? Close to “ans”, “answer”, “result”, or “therefore” Global features Is in the text of the first answer (the first answer is often marked as the best answer in Yahoo! answers)? Is in problem text? Frequency in the text of all answers for this problem Frequency in the first position of all answers Frequency in the last position of all answers Number value features Is positive? Is an integer? Its value is between 0 to 1? Equals to the predicted solution in automatic equation extraction? Number set features Are numbers at same line of answer text? Are numbers at consecutive lines of answer text? Frequency of the numbers at same line in all answers Frequency of the numbers at consecutive lines in all answers Dimension features Has singular verb in question? Has plural noun in question? Has special words (e.g., and, both, each, all) in question? Inference: After we train the model to get parameter vector ν, the predicated gold answer is selected from the candidate number subsets by maximizing ν·f(xi, yik). Formally, the predicated gold 890 answer is, arg max yik∈Yi ν · f(xi, yik) About 3,000 problems are manually annotated with answers and equations by the human annotators we hire. Then we train and evaluate our model using 5-fold cross validation. The extractor’s performance is shown in Figure 2. To preserve an accuracy rate of 90%, we use score = 3 as the threshold and only keep problems with predicted confidence score >= 3. Please note that precision is more important than recall in our scenario. We need to guarantee that most extracted answers are correct. Lower recall can be tackled by processing more posts. 0 1 2 3 4 5 6 0 0.2 0.4 0.6 0.8 1 confidence score accuracy Precision Recall Figure 2: Accuracy of answer extraction 3.3 Automatic Equation Annotation Now we illustrate how to extract equation systems automatically from the unstructured answer text of a post. The input is the answer text of n answers: T = {T1, T2, . . . , Tn} For example in Figure 1, there are four answers, each corresponding to a piece of answer text Ti. The task is not easy, because variables and equations may not be in standard formats in answer text. In addition, equations may be duplicate (like those in Answer 1 of Figure 1). Our algorithm is a two-phase procedure: Candidate extraction: We extract an equation system from each piece of answer text Ti. In processing Ti, we first extract a list of equations by regular expression matching. Then the equations are added to the equation system by the order of their occurrences in the text. Before adding an equation, we check whether it can be induced by the already-added equations. If so, we skip it. Duplicate equations are effectively reduced in this way. 
Voting by solution: We solve each equation system obtained from the first phase and build a (equation system, solution) bipartite graph. We then choose the equation system that has the maximum degree as our output. For example, if three equation systems return the solution {24} and the fourth returns {-1}, we will choose one from the first three equation systems. To improve precision, we do not return any equation system if the maximal degree is less than 2. We evaluate our equation extractor on 3,000 manually annotated problems 4. For an equation system extracted for a problem, we say it is correct if the annotated gold answer is a subset of the solutions to the equation system. For example, if the gold answer is {16, 34} and the solution to the equation system is {16, 34, 100}, then the equation system is considered correct. Evaluation results show a precision of 91.4% and a recall of 64.7%. 3.4 Datasets Summary Below are a list of previous benchmark datasets for math word problem solving. Alg514 is introduced in Kushman et al. (2014) and also used in Zhou et al. (2015) for evaluation. It consists of 514 algebra word problems from algebra.com5, with each problem annotated with linear equations. The template (explained later) of each problem has to appear at least six times in the whole set. Verb395 (Hosseini et al., 2014): A collection of addition/subtraction problems. Dolphin1878: A collection built by Shi et al. (2015), containing 1,878 number word problems obtained from algebra.com and Yahoo! answers. DRAW (Upadhyay and Chang, 2015): Containing 1,000 algebra word problems from algebra.com, each annotated with linear equations. SingleEQ: By Koncel-Kedziorski et al. (2015), containing 508 problems, each of which corresponds to one single equation. Before comparing the datasets, let’s first introduce the concept of equation system templates, which are first introduced in Kushman et al. (2014) 4the same set of problems as we used in training and evaluating answer extraction 5http://www.algebra.com 891 Table 2: Comparison of different datasets Dataset # Problems # Templates # Sentences # Words Problems types Verb395 395 3 1.13k 12.4k homogeneous addition or subtraction problems Alg514 514 28 1.62k 19.3k algebra, linear Dolphin1878 1,878 1,183 3.30k 41.4k number word problems DRAW 1,000 232 2.67k 35.3k algebra, linear SingleEQ 508 31 1.38k 13.8k single equation, linear Dolphin18K 18,460 5,871 49.9k 604k linear + nonlinear for math word problem solving. A template is a unique form of equation system. For example, the following is a template of two equations: n1 · x1 + x2 = n2 x1 + n3 · x2 = n4 The following equation system corresponds to the above template, 3 · x1 + x2 = 5 x1 + 7 · x2 = 15 Table 2 shows some statistical information of our dataset and previous ones. It can be seen that our dataset has a much larger scale (about 10 times the size of the Dolphin1878 collection and more than 17 times larger than the others) and higher diversity (in terms of both problem types and the number of templates contained). We split our dataset into a development set and an evaluation set. The development set is used for algorithm design and debugging, while the evaluation set is for training and testing. Any problem in the evaluation set should be invisible to the people who design an automatic math problem solving system. Statistics on our dataset are shown in Table 3, where dev and eval represent the development set and the evaluation set respectively. 
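As a rough, hypothetical sketch of the two-phase equation annotation procedure of Section 3.3 (candidate extraction by regular expressions, then voting by solution), the Python fragment below uses SymPy to solve each candidate system and abstains unless at least two answers agree on a solution. The regular expression, parsing shortcuts and example strings are ours and much simpler than the actual annotation pipeline; in particular, the check that skips equations already induced by earlier ones is omitted.

import re
from collections import defaultdict
from sympy import Eq, solve, sympify

EQ_PATTERN = re.compile(r"([0-9A-Za-z+\-*/(). ]+)=([0-9A-Za-z+\-*/(). ]+)")

def extract_equations(answer_text):
    # Phase 1: pull candidate equations out of one piece of answer text.
    equations = []
    for lhs, rhs in EQ_PATTERN.findall(answer_text):
        try:
            equations.append(Eq(sympify(lhs), sympify(rhs)))
        except Exception:
            continue  # skip fragments that do not parse as algebra
    return equations

def vote_by_solution(answer_texts, min_votes=2):
    # Phase 2: solve each candidate system and vote on the resulting solutions.
    votes = defaultdict(list)  # solution signature -> equation systems
    for text in answer_texts:
        system = extract_equations(text)
        if not system:
            continue
        solutions = solve(system, dict=True)
        if solutions:
            signature = frozenset(str(v) for v in solutions[0].values())
            votes[signature].append(system)
    best = max(votes.items(), key=lambda kv: len(kv[1]), default=None)
    if best is None or len(best[1]) < min_votes:
        return None  # abstain when the maximal degree is below the threshold
    return best[1][0]

print(vote_by_solution(["8*x - 3*x = 35000", "5*x = 35000"]))

Here both toy answers solve to x = 7000, so the signature receives two votes and the first system is returned; answers whose equations disagree would leave the maximal degree at one and the sketch would abstain, mirroring the precision-oriented threshold described above.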
Most problems are assigned with both equation systems and gold answers. Some of them are annotated with answers only, either because annotators feel it is hard to do so, or because our equation extraction algorithm returns empty results. As most previous systems only handle linear equation systems, we summarize, in Table 4, the distribution of linear problems in the evaluation set by template size. In the table, the size of a template is defined as the number of problems corresponding to this template. Between the two numbers in each cell, the first one is the number of problems, Table 3: Annotation statistics for our dataset Equations Answer Sum + answer only Manual 909 67 976 dev Auto 2,245 507 2,752 All 3,154 574 3,728 Manual 3,605 321 3,926 eval Auto 8,754 2,052 10,806 All 12,359 2,373 14,732 and the second number (or the one in parentheses) is the number of templates in this category. For example, in the automatically annotated evaluation set, 166 templates have size 6 or larger. They correspond to 4,826 problems. 4 Experiments 4.1 Systems for evaluation We report the performance of several state-of-theart systems on our new dataset. KAZB: A template-based statistical learning method introduced in Kushman et al. (2014). It maps a problem to one equation template defined in the training set by reasoning across problem sentences. KAZB reports an accuracy of 68.7% on the Alg514 dataset. ZDC: Proposed in Zhou et al. (2015) as an improved version of KAZB. It reduces the search space by not modeling alignment between noun phrases and variables. It achieves an accuracy of 79.7% on Alg514. SIM is a simple similarity-based method implemented by us. To solve a problem, it calculates the lexical similarity between the problem and each problem in the training set. Then the equation system of the most similar problem is ap892 Table 4: Problem distribution by template size (for linear problems only) Template size Manual Auto All (all linear 2,675 7,969 10,644 templates) (876) (2,609) (3,158) >=2 2,036 5,956 8,229 (237) (596) (743) >=5 1,678 4,979 7,081 (98) (196) (254) >=6 1,578 4,826 6,827 (78) (166) (216) >=10 1,337 4,329 6,216 (43) (96) (130) >=20 1,039 3,673 5,392 (22) (48) (68) >=50 634 2,684 4,281 7 18 30 plied to the new problem. In a little more details, a test problem PT is solved in two steps: template selection, and template slot filling. In the first step, each problem is modeled as a vector of word TF-IDF scores. The similarity between two problems is calculated by the weighted Jaccard similarity between their corresponding vectors. We choose, from the training data, problem P1 that has the maximal similarity with PT and use the equation template T of P1 as the template of problem PT . In the second step, the numbers appearing in problem PT are mapped to the number slots of template T (which has been identified in the first step). The mapping is implemented by selecting one problem P2 from all the training problems corresponding to template T so that it has the minimum word-level edit-distance to PT . Then the number mapping of P2 is borrowed as the number mapping of PT . For example, for the following test problem, An overnight mail service charges $3.60 for the first six ounces and $0.45 for each additional ounce or fraction of an ounce. Find the number of ounces in a package that cost $7.65 to deliver. 
Assuming that a problem P1 has maximum Jaccard similarity with the above problem and its corresponding equation template is as follows, this template will be identified in the first step, n1 + n2 ∗(x −n3) = n4 Assume that P2 has the minimum edit-distance to PT among all the training problems corresponding to template T. Suppose the numbers in P2 are (by their order in the problem text), 3.5, 5, 0.5, 6.5 Also suppose P2 is annotated with the following equation system, 3.5 + 0.5 ∗(x −5) = 6.5 Then we will choose P2 and borrow its number mapping. So the mapping from numbers in the above test problem to template slots will be, 3.60/n1; 6/n3; 0.45/n2; 7.65/n4 In implementing SIM, we do not use any POS tagging or syntactic parsing features for similarity calculation. This method gets an accuracy of 71.2% on Alg514 and 49.0% on SingleEQ. Systems not included for evaluation: Although the system of Shi et al. (2015) achieves very high performance on number word problems, we do not include it in our evaluation because it is unknown how to extend it to other problem types. The system of Hosseini et al. (2014) is not included in our evaluation because it only handles homogeneous addition/subtraction problems. The systems of Koncel-Kedziorski et al. (2015) and Roy and Roth (2015) are also not included because so far they only supports problems with one single linear equation. 4.2 Overall Evaluation Results Table 5 shows the accuracy of various systems on different subsets of our dataset. In the table, Manual.Linear contains all the manually annotated problems with linear equation systems. It contains 2,675 problems and 876 templates (as shown in Table 4). Auto.LinearT6 (containing 4,826 problems) is the set of all the automatically annotated problems with a template size larger than or equal to 6. Similarly, LinearT2 means the subset of problems with template size ≥2. For each system on each subset, experiments are conducted using 5-fold cross-validation with 80% problems randomly selected as training data and the remaining 20% for test. In the table, “-” means that the system does not complete running on the dataset in three days. Since KAZB and ZDC only handle linear equation systems, they are not applicable to the datasets 893 Systems Dataset KAZB ZDC SIM Manual.Linear 10.7% 11.1% 13.3% Manual.LinearT2 12.8% 13.9% 17.3% Manual.LinearT6 17.6% 17.1% 18.8% Auto.Linear 17.2% 17.4% Auto.LinearT2 20.1% 19.2% Auto.LinearT6 19.2% 18.4% All.Linear 17.9% 18.4% All.LinearT2 20.6% 20.3% All.LinearT6 21.7% 20.2% All (Dolphin18K) n/a n/a 16.7% Alg514 68.7% 79.7% 71.2% Table 5: Overall evaluation results containing nonlinear problems. An “n/a” is filled in the corresponding cell in this case. The results show that all three systems (KAZB, ZDC, and SIM) have extremely low performance on our new datasets. Surprisingly, no system achieves an accuracy rate of over 25%. Such results indicate that automatic math word problem solving is still a very challenging task. Another surprising observation is that KAZB and ZDC do not perform better than SIM, a simple similarity-based method which runs much faster than the two statistical learning systems. By comparing the results obtained from the manual version of the datasets with their corresponding auto version (for example, Manuall.Linear vs. Auto.Linear), we can see larger accuracy scores on the auto versions 6. This demonstrates the usefulness of the automatically annotated data. 
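For illustration, a minimal sketch of the first step of the SIM baseline described in Section 4.1, TF-IDF-weighted Jaccard similarity for template selection, might look as follows. It is our own simplification: it omits the second slot-filling step based on edit distance, and the exact TF-IDF weighting is an assumption rather than the authors' formulation.

import math
from collections import Counter

def tfidf_vectors(problems):
    # Map each problem (a list of words) to a {word: tf-idf weight} bag.
    df = Counter()
    for words in problems:
        df.update(set(words))
    n = len(problems)
    vectors = []
    for words in problems:
        tf = Counter(words)
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

def weighted_jaccard(u, v):
    # Weighted Jaccard: sum of element-wise minima over sum of maxima.
    keys = set(u) | set(v)
    num = sum(min(u.get(k, 0.0), v.get(k, 0.0)) for k in keys)
    den = sum(max(u.get(k, 0.0), v.get(k, 0.0)) for k in keys)
    return num / den if den else 0.0

def most_similar_template(test_words, train_problems, train_templates):
    # Return the equation template of the most similar training problem.
    vectors = tfidf_vectors(train_problems + [test_words])
    test_vec, train_vecs = vectors[-1], vectors[:-1]
    scores = [weighted_jaccard(test_vec, v) for v in train_vecs]
    best = max(range(len(scores)), key=scores.__getitem__)
    return train_templates[best]

The min-over-max ratio used here is the standard definition of weighted Jaccard similarity over the two TF-IDF bags, which is one straightforward reading of the description of SIM given above.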
Considering the huge cost of manually assigning equation systems and gold answers, automatic annotation has good potential in constructing larger datasets. 4.3 Why Different from Previous Results The last line of Table 5 displays the results on Alg514. All three systems perform well on Alg514 but poorly on Dolphin18K. To study the reason of such a large gap, we derive two small datasets from All.Linear by referring to the equation templates in Alg514. Small.01: The set of all problems in All.Linear that correspond to one of the 28 templates in Alg514. The dataset contains 2,021 problems. 6Please note that the auto versions are more than 2 times larger. Small.02: A subset of Small.01, constructed by randomly removing problems from Small.01 so that each template contains similar number of problems as in Alg514. In other words, Small.02 and Alg514 have similar problem distribution among templates. Small.01 Small.02 KAZB 29.9% 50.0% ZDC 30.1% 52.7% SIM 33.7% 43.0% Table 6: The case of fewer number of templates We still use 5-fold cross validation to test and compare system performance on the two small datasets. Evaluation results are displayed in Table 6. We now obtain higher accuracy scores for each system, but there is a big difference between the results on Small.01 and Small.02. As mentioned in (Upadhyay and Chang, 2015), Alg514 has a skewed problem distribution, with a few templates covering almost 50% problems. This may be the main reason why all three systems achieve high accuracy on this dataset and on Small.02. From all of the above results, we see at least two factors which affect system performance: number of templates in the dataset, and the distribution of problems among the templates. For a small dataset, the distribution of problems among templates have a huge impact on evaluation results. 4.4 Effect of Training Data Size Now we investigate the performance change of various systems when the size of training data changes. The goal is to check whether the accuracy increases quickly when more training data are added. This is important: If it is the case, we can tackle this task by simply adding more training data, either manually or automatically. Otherwise, we have to discover new approaches. We conduct experiments in two settings: fixedtest-set, and increasing-test-set. In the first setting, we randomly choose 1/2 of the problems from the Manual.Linear subset to form a fixed test-set (with size 1330). Then the other problems in All.Linear forms a candidate training collection (containing 9314 elements). We construct training sets of different scales by doing random sampling from the candidate training collection. 
In the second setting (i.e., increasing-test-set), we construct datasets (training set plus test set) of 894 Training data source All.LinearT6 All.Linear Training data size 138 434 1024 2940 5771 500 1000 2000 5000 9000 Test set size 1330 1330 1330 1330 1330 1330 1330 1330 1330 1330 KAZB accuracy (%) 6.7 7.2 7.1 8.3 ZDC accuracy (%) 6.1 7.5 8.6 11.4 12.6 5.5 9.2 10.5 12.5 13.1 SIM accuracy (%) 5.5 8.7 11.0 13.7 15.9 6.5 10.8 12.2 14.9 18.4 Table 7: System performance with different training data size (setting: fixed-test-set) Training data size 400 800 1600 4000 8516 Test set size 100 200 400 1000 2128 KAZB 5.4% 6.7% 11.7% ZDC 5.8% 7.6% 12.9% 17.0% 17.9% SIM 7.4% 10.0% 13.3% 16.9% 18.4% Table 8: System performance with different training data size (setting: increasing-test-set) different scales by doing random sampling from All.Linear, and then conduct 5-fold cross validation on each dataset. In each fold, 80% problems are chosen at random for training, and the other 20% for testing. The results in the two settings are reported in Tables 7 and 8 respectively. Both tables show that the accuracy of all the three systems improves steadily but slowly along with the increasing of training data size. So it is not very effective to improve accuracy by simply adding more training data. 4.5 Results Summary In summary, the following observations are made from the experiments on our new dataset. First, all systems evaluated on the Dolphin18K dataset perform much worse than on the small and less diverse datasets. Second, the two statistical learning methods do not perform better than a simple similarity-based method. Third, it seems not promising for the current methods to achieve much better results by simply adding more training data. Automatic math word problem solving is still a very challenging task so far. 5 Conclusion We have constructed Dolphin18K, a large dataset for training and evaluating automatic math word problem solving systems. The new dataset is almost one order of magnitude larger than most of previous ones, and has a much higher level of diversity in term of problem types. We reduce human annotation cost by automatically extracting gold answers and equation systems from the unstructured answer text of CQA posts. We have also conducted experiments on our dataset to evaluate state-of-the-art systems. Interesting and surprising observations are made from the experimental results. Acknowledgments We would like to thank the annotators for their efforts in annotating the math problems in our dataset. Thanks to the anonymous reviewers for their helpful comments and suggestions. References Eugene Agichtein, Carlos Castillo, Debora Donato, Aristides Gionis, and Gilad Mishne. 2008. Finding high-quality content in social media. In First ACM International Conference on Web Search and Data Mining (WSDM’08). Yefim Bakman. 2007. Robust understanding of word problems with extraneous information. http://arxiv.org/abs/math/0701393. Daniel G. Bobrow. 1964a. Natural language input for a computer problem solving system. Technical report, Cambridge, MA, USA. Daniel G. Bobrow. 1964b. Natural language input for a computer problem solving system. Ph.D. Thesis. Diane J. Briars and Jill H. Larkin. 1984. An integrated model of skill in solving elementary word problems. Cognition and Instruction, 1(3):245–296. Eugene Charniak. 1968. Carps, a program which solves calculus word problems. Technical report. 895 Eugene Charniak. 1969. Computer solution of calculus word problems. 
In Proceedings of the 1st International Joint Conference on Artificial Intelligence, pages 303–316. Gao Cong, Long Wang, Chin-Yew Lin, Young-In Song, and Yueheng Sun. 2008. Finding questionanswer pairs from online forums. In Proceedings of 31st International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR’08), pages 467–474. Denise Dellarosa. 1986. A computer simulation of children’s arithmetic word-problem solving. Behavior Research Methods, Instruments, & Computers, 18(2):147–154. Shilin Ding, Gao Cong, Chin-Yew Lin, and Xiaoyan Zhu. 2008. Using conditional random fields to extract context and answers of questions from online forums. In Proceedings of the 46th Annual Meeting of the ACL: HLT (ACL 2008), pages 710–718, Columbus, USA. Charles R. Fletcher. 1985. Understanding and solving arithmetic word problems: A computer simulation. Behavior Research Methods, Instruments, & Computers, 17(5):565–571. Ralf Herbrich, Thore Graepel, and Klaus Obermayer, 2000. Large Margin Rank Boundaries for Ordinal Regression, chapter 7, pages 115–132. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), October. Rik Koncel-Kedziorsk, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585–597. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Christian Liguda and Thies Pfeiffer. 2012. Modeling math word problems with augmented semantic networks. In Natural Language Processing and Information Systems. International Conference on Applications of Natural Language to Information Systems (NLDB-2012), pages 247–252. Anirban Mukherjee and Utpal Garain. 2008. A review of methods for automatic understanding of natural language mathematical problems. Artificial Intelligence Review, 29(2):93–122. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1743–1752. The Association for Computational Linguistics. Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Shyam Upadhyay and Ming-Wei Chang. 2015. Draw: A challenging and diverse algebra word problem set. Number MSR-TR-2015-78, October. Ma Yuhui, Zhou Ying, Cui Guangzuo, Ren Yun, and Huang Ronghuai. 2010. Frame-based calculus of solving arithmetic multistep addition and subtraction word problems. 
Education Technology and Computer Science, International Workshop, 2:476– 479. Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 896
2016
84
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 897–907, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Embeddings for Word Sense Disambiguation: An Evaluation Study Ignacio Iacobacci1, Mohammad Taher Pilehvar2 and Roberto Navigli1 1Department of Computer Science, Sapienza University of Rome, Italy 2Language Technology Lab, University of Cambridge, UK {iacobacci,navigli}@di.uniroma1.it [email protected] Abstract Recent years have seen a dramatic growth in the popularity of word embeddings mainly owing to their ability to capture semantic information from massive amounts of textual content. As a result, many tasks in Natural Language Processing have tried to take advantage of the potential of these distributional models. In this work, we study how word embeddings can be used in Word Sense Disambiguation, one of the oldest tasks in Natural Language Processing and Artificial Intelligence. We propose different methods through which word embeddings can be leveraged in a state-of-the-art supervised WSD system architecture, and perform a deep analysis of how different parameters affect performance. We show how a WSD system that makes use of word embeddings alone, if designed properly, can provide significant performance improvement over a state-ofthe-art WSD system that incorporates several standard WSD features. 1 Introduction Embeddings represent words, or concepts in a low-dimensional continuous space. These vectors capture useful syntactic and semantic information, such as regularities in language, where relationships are characterized by a relation-specific vector offset. The ability of embeddings to capture knowledge has been exploited in several tasks, such as Machine Translation (Mikolov et al., 2013, MT), Sentiment Analysis (Socher et al., 2013), Word Sense Disambiguation (Chen et al., 2014, WSD) and Language Understanding (Mesnil et al., 2013). Supervised WSD is based on the hypothesis that contextual information provides a good approximation to word meaning, as suggested by Miller and Charles (1991): semantically similar words tend to have similar contextual distributions. Recently, there have been efforts on leveraging embeddings for improving supervised WSD systems. Taghipour and Ng (2015) showed that the performance of conventional supervised WSD systems can be increased by taking advantage of embeddings as new features. In the same direction, Rothe and Sch¨utze (2015) trained embeddings by mixing words, lexemes and synsets, and introducing a set of features based on calculations on the resulting representations. However, none of these techniques takes full advantage of the semantic information contained in embeddings. As a result, they generally fail in providing substantial improvements in WSD performance. In this paper, we provide for the first time a study of different techniques for taking advantage of the combination of embeddings with standard WSD features. We also propose an effective approach for leveraging embeddings in WSD, and show that this can provide significant improvement on multiple standard benchmarks. 2 Word Embeddings An embedding is a representation of a topological object, such as a manifold, graph, or field, in a certain space in such a way that its connectivity or algebraic properties are preserved (Insall et al., 2015). Presented originally by Bengio et al. 
(2003), word embeddings aim at representing, i.e., embedding, the ideal semantic space of words in a real-valued continuous vector space. In contrast to traditional distributional techniques, such as Latent Semantic Analysis (Landauer and Dutnais, 1997, LSA) and Latent Dirichlet Allocation (Blei et al., 2003, LDA), Bengio et al. (2003) designed a 897 feed-forward neural network capable of predicting a word given the words preceding (i.e., leading up to) that word. Collobert and Weston (2008) presented a much deeper model consisting of several layers for feature extraction, with the objective of building a general architecture for NLP tasks. A major breakthrough occurred when Mikolov et al. (2013) put forward an efficient algorithm for training embeddings, known as Word2vec. A similar model to Word2vec was presented by Pennington et al. (2014, GloVe), but instead of using latent features for representing words, it makes an explicit representation produced from statistical calculation on word countings. Numerous efforts have been made to improve different aspects of word embeddings. One way to enhance embeddings is to represent more finegrained semantic items, such as word senses or concepts, given that conventional embeddings conflate different meanings of a word into a single representation. Several research studies have investigated the representation of word senses, instead of words (Reisinger and Mooney, 2010; Huang et al., 2012; Camacho-Collados et al., 2015b; Iacobacci et al., 2015; Rothe and Sch¨utze, 2015). Another path of research is aimed at refining word embeddings on the basis of additional information from other knowledge resources (Faruqui et al., 2015; Yu and Dredze, 2014). A good example of this latter approach is that proposed by Faruqui et al. (2015), which improves pre-trained word embeddings by exploiting the semantic knowledge from resources such as PPDB1 (Ganitkevitch et al., 2013), WordNet (Miller, 1995) and FrameNet (Baker et al., 1998). In the following section we discuss how embeddings can be integrated into an important lexical semantic task, i.e., Word Sense Disambiguation. 3 Word Sense Disambiguation Natural language is inherently ambiguous. Most commonly-used words have several meanings. In order to identify the intended meaning of a word one has to analyze the context in which it appears by directly exploiting information from raw texts. The task of automatically assigning predefined meanings to words in contexts, known as Word Sense Disambiguation, is a fundamental task in computational lexical semantics (Navigli, 2009). There are four conventional approaches to 1www.paraphrase.org/#/download WSD which we briefly explain in the following. 3.1 Supervised methods These methods make use of manually senseannotated data, which are curated by human experts. They are based on the assumption that a word’s context can provide enough evidence for its disambiguation. Since manual sense annotation is a difficult and time-consuming process, something known as the ”knowledge acquisition bottleneck” (Pilehvar and Navigli, 2014), supervised methods are not scalable and they require repetition of a comparable effort for each new language. Currently, the best performing WSD systems are those based on supervised learning. It Makes Sense (Zhong and Ng, 2010, IMS) and the system of Shen et al. (2013) are good representatives for this category of systems. We provide more information on IMS in Section 4.1. 3.2 Unsupervised methods These methods create their own annotated corpus. 
The underlying assumption is that similar senses occur in similar contexts, therefore it is possible to group word usages according to their shared meaning and induce senses. These methods lead to the difficulty of mapping their induced senses into a sense inventory and they still require manual intervention in order to perform such mapping. Examples of this approach were studied by Agirre et al. (2006), Brody and Lapata (2009), Manandhar et al. (2010), Van de Cruys and Apidianaki (2011) and Di Marco and Navigli (2013). 3.3 Semi-supervised methods Other methods, called semi-supervised, take a middle-ground approach. Here, a small manuallyannotated corpus is usually used as a seed for bootstrapping a larger annotated corpus. Examples of these approaches were presented by Mihalcea and Faruque (2004). A second option is to use a wordaligned bilingual corpus approach, based on the assumption that an ambiguous word in one language could be unambiguous in the context of a second language, hence helping to annotate the sense in the first language (Ng and Lee, 1996). 3.4 Knowledge-based methods These methods are based on existing lexical resources, such as knowledge bases, semantic networks, dictionaries and thesauri. Their main feature is their coverage, since they function indepen898 dently of annotated data and can exploit the graph structure of semantic networks to identify the most suitable meanings. These methods are able to obtain wide coverage and good performance using structured knowledge, rivaling supervised methods (Patwardhan and Pedersen, 2006; Mohammad and Hirst, 2006; Agirre et al., 2010; Guo and Diab, 2010; Ponzetto and Navigli, 2010; Miller et al., 2012; Agirre et al., 2014; Moro et al., 2014; Chen et al., 2014; Camacho-Collados et al., 2015a). 3.5 Standard WSD features As was analyzed by Lee and Ng (2002), conventional WSD systems usually make use of a fixed set of features to model the context of a word. The first feature is based on the words in the surroundings of the target word. The feature usually represents the local context as a binary array, where each position represents the occurrence of a particular word. Part-of-speech (POS) tags of the neighboring words have also been used extensively as a WSD feature. Local collocations represent another standard feature that captures the ordered sequences of words which tend to appear around the target word (Firth, 1957). Though not very popular, syntactic relations have also been studied as a possible feature (Stetina et al., 1998) in WSD. More sophisticated features have also been studied. Examples are distributional semantic models, such as Latent Semantic Analysis (Van de Cruys and Apidianaki, 2011) and Latent Dirichlet Allocation (Cai et al., 2007). Inasmuch as they are the dominant distributional semantic model, word embeddings have also been applied as features to WSD systems. In this paper we study different methods through which word embeddings can be used as WSD features. 3.6 Word Embeddings as WSD features Word embeddings have become a prominent technique in distributional semantics. These methods leverage neural networks in order to model the contexts in which a word is expected to appear. Thanks to their ability in efficiently learning the semantics of words, word embeddings have been applied to a wide range of NLP applications. Several studies have also investigated their integration into the Word Sense Disambiguation setting. 
These include the works of Zhong and Ng (2010), Taghipour and Ng (2015), Rothe and Schütze (2015), and Chen et al. (2014), which leverage embeddings for supervised (the former three) and knowledge-based (the latter) WSD. However, to our knowledge, no previous work has investigated methods for integrating word embeddings in WSD and the role that different training parameters can play. In this paper, we put forward a framework for a comprehensive evaluation of different methods of leveraging word embeddings as WSD features in a supervised WSD system. We provide an analysis of the impact of different parameters in the training of embeddings on the WSD performance. We consider four different strategies for integrating a pre-trained word embedding in a supervised WSD system, discussed in what follows.

3.6.1 Concatenation
Concatenation is our first strategy, which is inspired by the model of Bengio et al. (2003). This method consists of concatenating the vectors of the words surrounding a target word into a larger vector that has a size equal to the aggregated dimensions of all the individual embeddings. Let w_{ij} be the weight associated with the i-th dimension of the vector of the j-th word in the sentence, let D be the dimensionality of this vector, and W be the window size, which is defined as the number of words on a single side. We are interested in representing the context of the I-th word in the sentence. The i-th dimension of the concatenation feature vector, which has a size of 2WD, is computed as follows:

e_i = \begin{cases} w_{i \bmod D,\; I - W + \lfloor i/D \rfloor} & \text{if } \lfloor i/D \rfloor < W \\ w_{i \bmod D,\; I - W + 1 + \lfloor i/D \rfloor} & \text{otherwise} \end{cases}

where mod is the modulo operation, i.e., the remainder after division.

3.6.2 Average
As its name indicates, the average strategy computes the centroid of the embeddings of all the surrounding words. The formula divides each dimension by 2W since the number of context words is twice the window size:

e_i = \sum_{\substack{j = I - W \\ j \neq I}}^{I + W} \frac{w_{ij}}{2W}

3.6.3 Fractional decay
Our third strategy for constructing a feature vector on the basis of the context word embeddings is inspired by the way Word2vec combines the words in the context. Here, the importance of a word for our representation is assumed to be inversely proportional to its distance from the target word. Hence, surrounding words are weighted based on their distance from the target word:

e_i = \sum_{\substack{j = I - W \\ j \neq I}}^{I + W} w_{ij} \, \frac{W - |I - j|}{W}

3.6.4 Exponential decay
Exponential decay functions similarly to fractional decay in giving more importance to the close context, but here the weighting is performed exponentially:

e_i = \sum_{\substack{j = I - W \\ j \neq I}}^{I + W} w_{ij} \, (1 - \alpha)^{|I - j| - 1}

where \alpha = 1 - 0.1^{(W-1)^{-1}} is the decay parameter. We choose the parameter in such a way that the immediate surrounding words contribute 10 times more than the last words on both sides of the window.

4 Framework
Our goal was to experiment with a state-of-the-art conventional supervised WSD system and a varied set of word embedding techniques. In this section we discuss the WSD system as well as the word embeddings used in our experiments.

4.1 WSD System
We selected It Makes Sense (Zhong and Ng, 2010, IMS) as our underlying framework for supervised WSD. IMS provides an extensible and flexible platform for supervised WSD by allowing the verification of different WSD features and classification techniques.
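Before detailing the default IMS features, the following minimal sketch illustrates how the four combination strategies of Section 3.6 could turn pre-trained embeddings of the context words into a single feature vector of the kind that is later fed to the classifier. It is plain Python/NumPy written for this exposition only; the embedding matrix E, the vocabulary dictionary vocab and the function name are illustrative assumptions, not part of the released system.

import numpy as np

def context_vector(tokens, target_idx, E, vocab, window=10, strategy="exp"):
    # E: pre-trained embedding matrix (one row per vocabulary word), vocab: word -> row index.
    D = E.shape[1]                      # embedding dimensionality
    positions = [j for j in range(target_idx - window, target_idx + window + 1)
                 if j != target_idx and 0 <= j < len(tokens)]

    def emb(j):
        # zero vector for out-of-vocabulary words (an assumption of this sketch)
        return E[vocab[tokens[j]]] if tokens[j] in vocab else np.zeros(D)

    if strategy == "concat":            # concatenation: one D-sized slot per context position
        slots = []
        for j in range(target_idx - window, target_idx + window + 1):
            if j == target_idx:
                continue
            slots.append(emb(j) if 0 <= j < len(tokens) else np.zeros(D))
        return np.concatenate(slots)    # length 2 * window * D

    vec = np.zeros(D)
    if strategy == "avg":               # centroid of the surrounding words
        for j in positions:
            vec += emb(j) / (2 * window)
    elif strategy == "frac":            # weight decays linearly with distance from the target
        for j in positions:
            vec += emb(j) * (window - abs(target_idx - j)) / window
    elif strategy == "exp":             # weight decays exponentially with distance (window > 1 assumed)
        alpha = 1 - 0.1 ** (1.0 / (window - 1))
        for j in positions:
            vec += emb(j) * (1 - alpha) ** (abs(target_idx - j) - 1)
    return vec

Note that for the concatenation strategy the resulting vector has 2WD dimensions, which is the source of the scalability issue discussed later for large dimensionalities and window sizes.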
By default, IMS makes use of three sets of features: (1) POS tags of the surrounding words, with a window of three words on each side, restricted by the sentence boundary, (2) the set of words that appear in the context of the target word after stopword removal, and (3) local collocations which consist of 11 features around the target word. IMS uses a linear support vector machine (SVM) as its classifier. 4.2 Embedding Features We take the real-valued word embeddings as new features of IMS and introduce them into the system without performing any further modifications. We carried out experiments with three different embeddings: • Word2vec (Mikolov et al., 2013): We used the Word2vec toolkit2 to learn 400 dimensional vectors on the September-2014 dump of the English Wikipedia which comprises around three billion tokens. We chose the Skip-gram architecture with the negative sampling set to 10. The sub-sampling of frequent words was set to 10−3 and the window size to 10 words. • C&W (Collobert and Weston, 2008): These 50 dimensional embeddings were learnt using a neural network model, consisting of several layers for feature extraction. The vectors were trained on a subset of the English Wikipedia.3 • Retrofitting: Finally, we used the approach of Faruqui et al. (2015) to retrofit our Word2vec vectors. We used the Paraphrase Database (Ganitkevitch et al., 2013, PPDB) as external knowledge base for retrofitting and set the number of iterations to 10. 5 Experiments We evaluated the performance of our embeddingbased WSD system on two standard WSD tasks: lexical sample and all-words. In all the experiments in this section we used the exponential decay strategy (cf. Section 3.6) and a window size of ten words on each side of the target word. 5.1 Lexical Sample WSD Experiment The lexical sample WSD tasks provide training datasets in which different occurrences of a small set of words are sense annotated. The goal is for a WSD system to analyze the contexts of the individual senses of these words and to capture clues that can be used for distinguishing different senses of a word from each other at the test phase. Datasets. As our benchmark for the lexical sample WSD, we chose the Senseval-2 (Edmonds and Cotton, 2001), Senseval-3 (Mihalcea et al., 2004), and SemEval-2007 (Pradhan et al., 2007) English Lexical Sample WSD tasks. The former two cover nouns, verbs and adjectives in their datasets whereas the latter task focuses on nouns and verbs 2code.google.com/archive/p/word2vec/ 3http://ronan.collobert.com/senna/ 900 Task Training Test noun verb adjective noun verb adjective Senseval-2 (SE2) 4851 3566 755 1740 1806 375 Senseval-3 (SE3) 3593 3953 314 1807 1978 159 SemEval-07 (SE7) 13287 8987 − 2559 2292 − Table 1: The number of sentences per part of speech in the datasets of the English lexical sample tasks we considered for our experiments. System SE2 SE3 SE7 IMS (2010) 65.3 72.9 87.9 Taghipour and Ng (2015) 66.2 73.4 − AutoExtend (2015) 66.5 73.6 − IMS + C&W 64.3 70.1 88.0 IMS + Word2vec 69.9 75.2 89.4 IMS + Retrofitting 65.9 72.8 88.3 C&W feature only 55.0 61.6 83.4 Word2vec feature only 65.6 69.4 87.0 Retrofitting feature only 67.2 72.7 88.0 Table 2: F1 performance on the three English lexical sample datasets. IMS + X denotes the improved IMS system when the X set of word representations were used as additional features. We also show in the last three rows the results for the IMS system when word representations were used as the only features. only. 
Table 1 shows the number of sentences per part of speech for the training and test datasets of each of these tasks. Comparison systems. In addition to the vanilla IMS system in its default setting we compared our system against two recent approaches that also modify the IMS system so that it can benefit from the additional knowledge derived from word embeddings for improved WSD performance: (1) the system of Taghipour and Ng (2015), which combines word embeddings of Collobert and Weston (2008) using the concatenation strategy (cf. Section 3.6) and introduces the combined embeddings as a new feature in addition to the standard WSD features in IMS; and (2) AutoExtend (Rothe and Sch¨utze, 2015), which constructs a whole new set of features based on vectors made from words, senses and synsets of WordNet and incorporates them in IMS. 5.1.1 Lexical sample WSD results Table 2 shows the F1 performance of the different systems on the three lexical sample datasets. As can be seen, the IMS + Word2vec system improves over all comparison systems including those that combine standard WSD and embedding features (i.e., the system of Taghipour and Ng (2015) and AutoExtend) across all the datasets. This shows that our proposed strategy for introducing word embeddings into the IMS system on the basis of exponential decay was beneficial. In the last three rows of the table, we also report the performance of the WSD systems that leverage only word embeddings as their features and do not incorporate any standard WSD feature. It can be seen that word embeddings, in isolation, provide competitive performance, which proves their capability in obtaining the information captured by standard WSD features. Among different embeddings, the retrofitted vectors provide the best performance when used in isolation. 5.2 All-Words WSD Experiments The goal in this task is to disambiguate all the content words in a given text. In order to learn models for disambiguating a large set of content words, a high-coverage sense-annotated corpus is required. Since all-words tasks do not usually provide any training data, the challenge here is not only to learn accurate disambiguation models from the training data, as is the case in the lexical sample task, but also to gather high-coverage training data and to learn disambiguation models for as many words as possible. Training corpus. As our training corpus we opted for two available resources: SemCor and OMSTI. SemCor (Miller et al., 1994) is a manually sense-tagged corpus created by the WordNet project team at Princeton University. The dataset is a subset of the English Brown Corpus and comprises around 360,000 words, providing annotations for more than 200K content words.4 OM4We used automatic mappings to WordNet 3.0 provided in web.eecs.umich.edu/∼mihalcea/downloads.html. 901 STI5 (One Million Sense-Tagged for Word Sense Disambiguation and Induction) was constructed based on the DSO corpus (Ng and Lee, 1996) and provides annotations for around 42K different nouns, verbs, adjectives, and adverbs. Datasets. As benchmark for this experiment, we considered the Senseval-2 (Edmonds and Cotton, 2001), Senseval-3 (Snyder and Palmer, 2004), and SemEval-2007 (Pradhan et al., 2007) English allwords tasks. There are 2474, 2041, and 465 words for which at least one of the occurrences has been sense annotated in the Senseval-2, Senseval-3 and SemEval-2007 datasets, respectively. Experimental setup. 
Similarly to the lexical sample experiment, in the all-words setting we used the exponential decay strategy (cf. Section. 4.2) in order to incorporate word embeddings as new features in IMS. For this experiment, we only report the results for the best-performing word embeddings in the lexical sample experiment, i.e., Word2vec (see Table 2). Comparison systems. We benchmarked the performance of our system against five other systems. Similarly to our lexical sample experiment, we compared against the vanilla IMS system and the work of Taghipour and Ng (2015). In addition, we performed experiments on the nouns subsets of the datasets in order to be able to provide comparisons against two other WSD approaches: Babelfy (Moro et al., 2014) and Muffin (CamachoCollados et al., 2015a). Babelfy is a multilingual knowledge-based WSD and Entity Linking algorithm based on the semantic network of BabelNet. Muffin is a multilingual sense representation technique that combines the structural knowledge derived from semantic networks with the distributional statistics obtained from text corpora. The system uses sense-based representations for performing WSD. Camacho-Collados et al. (2015a) also proposed a hybrid system that averages the disambiguation scores of IMS with theirs (shown as “Muffin + IMS” in our tables). We also report the results for UKB w2w (Agirre and Soroa, 2009), another knowledge-based WSD approach based on Personalized PageRank (Haveliwala, 2002). Finally, we also carried out experiments with the pre-trained models6 that are pro5www.comp.nus.edu.sg/˜nlp/corpora.html 6www.comp.nus.edu.sg/˜nlp/sw/models. tar.gz System SE2 SE3 SE7 MFS baseline 60.1 62.3 51.4 IMS (Zhong and Ng, 2010) 68.2 67.6 58.3 Taghipour and Ng (2015) − 68.2 − IMS (pre-trained models) 67.7 67.5 58.0 IMS (SemCor) 62.5 65.0 56.5 IMS (OMSTI) 67.0 66.4 57.6 IMS + Word2vec (SemCor) 63.4 65.3 57.8 IMS + Word2vec (OMSTI) 68.3 68.2 59.1 Table 3: F1 performance on different English allwords WSD datasets. System SE2 SE3 SE7 MFS baseline 71.6 70.3 65.8 Babelfy − 68.3 62.7 Muffin − − 66.0 Muffin + IMS − − 68.5 UBK w2w − 65.3 56.0 IMS (pre-trained models) 77.5 74.0 66.5 IMS (SemCor) 73.0 70.8 64.2 IMS (OMSTI) 76.6 73.3 67.7 IMS + Word2vec (SemCor) 74.2 70.1 68.6 IMS + Word2vec (OMSTI) 77.7 74.1 71.5 Table 4: F1 performance in the nouns subsets of different all-words WSD datasets. vided with the IMS toolkit, as well as IMS trained on our two training corpora, i.e., SemCor and OMSTI. 5.2.1 All-words WSD results Tables 3 and 4 list the performance of different systems on, respectively, the whole and the nounsubset datasets of the three all-words WSD tasks. Similarly to our lexical sample experiment, the IMS + Word2vec system provided the best performance across datasets and benchmarks. The coupling of Word2vec embeddings to the IMS system proved to be consistently helpful. Among the two training corpora, as expected, OMSTI provided a better performance owing to its considerably larger size and higher coverage. Another point to be noted here is the difference between results of the IMS with the pre-trained models and those trained on the OMSTI corpus. Since we used the same system configuration across the two runs, we conclude that the OMSTI corpus is either substantially smaller or less representative than the corpus used by Zhong and Ng (2010) for building 902 the pre-trained models of IMS. Despite this fact, the IMS + Word2vec system can consistently improve the performance of IMS (pre-trained models) across the three datasets. 
This shows that a proper introduction of word embeddings into a supervised WSD system can compensate the negative effect of using lower quality training data. 6 Analysis We carried out a series of experiments in order to check the impact of different system parameters on the final WSD performance. We were particularly interested in observing the role that various training parameters of embeddings as well as WSD features have in the WSD performance. We used the Senseval-2 English Lexical Sample task as our benchmark for this analysis. 6.1 The effect of different parameters Table 5 shows F1 performance of different configurations of our system on the task’s dataset. We studied five different parameters: the type (i.e., w2v or Retrofitting) and dimensionality (200, 400, or 800) of the embeddings, combination strategy (concatenation, average, fractional or exponential decay), window size (5, 10, 20 and words), and WSD features (collocations, POS tags, surrounding words, all of these or none). All the embeddings in this experiment were trained on the same training data and, unless specified, with the same configuration as described in Section 4.2. As baseline we show in the table the performance of the vanilla WSD system, i.e., IMS. For better readability, we report the differences between the performances of our system and the baseline. We observe that the addition of Word2vec word embeddings to IMS (+w2v in the table) was beneficial in all settings. Among combination strategies, concatenation and average produced the smallest gain and did not benefit from embeddings of higher dimensionality. However, the other two strategies, i.e., fractional and exponential decay, showed improved performance with the increase in the size of the employed embeddings, irrespective of the WSD features. The window size showed a peak of performance when 10 words were taken in the case of standard word embeddings. For retrofitting, a larger window seems to have been beneficial, except when no standard WSD features were taken. Another point to note here is that, among the three WSD features, POS proved to be the most effective one while due to the nature of the embeddings, the exclusion of the Surroundings features in addition to the inclusion of the embeddings was largely beneficial in all the configurations. Furthermore, we found that the best configurations for this task were the ones that excluded Surroundings, and included w2v embeddings with a window of 10 and 800 dimensions with exponential decay strategy (70.2% of F1 performance) as well as the configuration used in our experiments, with all the standard features, and w2v embeddings with 400 dimensions, a window of 10 and exponential decay strategy (69.9% of F1 performance). The retrofitted embeddings provided lower performance improvement when added on top of standard WSD features. However, when they were used in isolation (shown in the right-most column), the retrofitted embeddings interestingly provided the best performance, improving the vanilla WSD system with standard features by 2.8 percentage points (window size 5, dimensionality 800). In fact, the standard features had a destructive role in this setting as the overall performance was reduced when they were combined with the retrofitted embeddings. Finally, we point out the missing values in the configuration with 800 dimensions and a window size of 20. 
Due to the nature of the concatenation strategy, this configuration greatly increased the number of features from embeddings only, reaching 32000 (800 x 2 x 20) features. Not only was the concatenation strategy unable to take advantage of the increased dimensionality, but also it was not able to scale. These results show that a state-of-the-art supervised WSD system can be constructed without incorporating any of the conventional WSD features, which in turn demonstrates the potential of retrofitted word embeddings for WSD. This finding is interesting, because it provides the basis for further studies on how synonymy-based semantic knowledge introduced by retrofitting might play a role in effective WSD, and how retrofitting might be optimized for improved WSD. Indeed, such studies may provide the basis for re-designing the standard WSD features. 6.2 Comparison of embedding types We were also interested in comparing different types of embeddings in our WSD framework. We tested for seven sets of embeddings with dif903 Collocations ✓ ✓ ✓ POS ✓ ✓ ✓ Surroundings ✓ ✓ ✓ Dimensionality 200 400 800 200 400 800 200 400 800 200 400 800 200 400 800 System Strategy Window IMS 62.4 63.7 62.0 65.2 − + w2v Con 5 +0.1 +0.4 +0.1 -0.1 +0.3 +0.2 +0.1 +0.5 +0.1 -0.2 +0.1 +0.1 46.9 48.7 44.2 10 -0.1 +0.5 +0.3 -0.1 +0.5 0.0 +0.6 +1.0 +0.5 -0.1 +0.1 -0.1 48.6 51.1 49.7 20 -0.2 +0.4 — -0.3 +0.3 — +0.7 +1.5 — -0.5 +0.4 — 52.5 54.1 — + w2v Avg 5 +0.8 +1.0 +1.0 +1.3 +1.3 +1.4 +3.9 +4.2 +4.1 +1.7 +1.4 +1.6 58.3 59.9 61.3 10 +0.8 +0.9 +0.9 +0.6 +0.7 +0.8 +3.6 +3.7 +3.9 +0.6 +0.6 +0.7 63.7 64.1 64.7 20 +0.3 +0.3 +0.3 +0.5 +0.3 +0.4 +2.4 +2.3 +2.3 +0.2 +0.2 +0.2 62.7 63.1 63.5 + w2v Frac 5 +3.9 +4.9 +5.2 +4.2 +4.6 +5.3 +6.3 +6.6 +6.8 +3.0 +3.6 +3.8 61.2 63.1 64.8 10 +4.9 +5.8 +5.7 +4.6 +5.2 +5.1 +5.9 +7.0 +7.4 +3.6 +4.3 +4.0 61.3 63.8 65.2 20 +4.4 +4.5 +4.7 +3.7 +4.0 +4.3 +4.8 +6.1 +5.4 +3.2 +3.3 +3.4 61.2 63.4 63.9 + w2v Exp 5 +4.1 +5.0 +5.2 +4.1 +4.7 +5.0 +6.1 +6.1 +6.4 +2.9 +3.5 +3.7 62.3 64.7 64.9 10 +5.4 +6.6 +6.4 +4.9 +5.8 +6.0 +7.2 +7.7 +8.2 +4.1 +4.7 +4.6 63.2 65.6 66.9 20 +5.2 +5.6 +5.9 +4.4 +5.1 +4.9 +6.1 +7.0 +6.8 +3.9 +4.3 +4.2 61.9 64.4 65.2 + Ret Con 5 -0.1 -0.1 -0.1 -0.1 -0.1 0.0 +0.1 +0.1 -0.1 -0.1 +0.1 +0.1 50.7 53.5 50.9 10 +0.1 0.0 0.0 -0.3 0.0 0.0 +0.1 +0.2 +0.1 0.0 0.0 0.0 52.1 54.2 53.4 20 0.0 0.0 — -0.2 0.0 — +0.7 +0.3 — 0.0 -0.1 — 53.7 54.8 — + Ret Avg 5 +0.1 0.0 -0.1 +0.1 0.0 -0.1 +0.8 +0.8 +0.7 +0.1 0.0 +0.1 60.7 60.3 60.5 10 -0.2 -0.1 0.0 -0.2 -0.3 0.0 +0.7 +0.7 +0.5 0.0 +0.1 +0.1 58.9 58.4 58.2 20 -0.1 +0.1 +0.1 -0.2 -0.2 -0.2 +0.5 +0.4 +0.4 0.0 0.0 0.0 56.5 56.0 55.5 + Ret Frac 5 +1.4 +1.3 +1.2 +1.2 +1.0 +0.9 +3.3 +3.1 +2.9 +0.5 +0.3 +0.3 66.5 67.3 67.7 10 +1.7 +1.4 +1.2 +1.5 +1.4 +1.2 +5.2 +4.7 +4.5 +0.7 +0.8 +0.6 64.4 66.2 66.1 20 +2.2 +2.2 +1.8 +2.2 +1.8 +2.0 +6.7 +6.4 +5.9 +1.3 +1.2 +1.0 64.0 64.2 64.7 + Ret Exp 5 +1.1 +1.1 +1.1 +0.8 +0.8 +0.7 +2.7 +2.6 +2.2 +0.3 +0.3 +0.3 66.8 67.7 68.0 10 +1.5 +1.3 +1.0 +1.2 +1.1 +1.0 +4.4 +4.2 +3.8 +0.7 +0.7 +0.3 65.9 67.2 67.5 20 +1.8 +1.7 +1.5 +1.7 +1.5 +1.5 +6.3 +5.9 +5.4 +1.1 +0.8 +0.7 65.1 65.8 66.5 Table 5: F1 performance of different models on the Senseval-2 English Lexical Sample task. We show results for varied dimensionality (200, 400, and 800), window size (5, 10 and 20 words) and combination strategy, i.e., Concatenation (Con), Averaging (Avg), Fractional decay (Frac), and Exponential decay (Exp). 
To make the table easier to read, we highlight each cell according to the relative performance gain in comparison to the IMS baseline (top row in the table). ferent dimensionalities and learning techniques: Word2vec embeddings trained on Wikipedia, with the Skip-gram model for dimensionalities 50, 300 and 500 (for comparison reasons) and CBOW with 300 dimensions, Word2vec trained on the Google News corpus with 300 dimensions and the Skipgram model, the 300 dimensional embeddings of GloVe, and the 50 dimensional C&W embeddings. Additionally we include experiments on a non-embedding model, a PMI-SVD vector space model trained by Baroni et al. (2014). Table 6 lists the performance of our system with different word representations in vector space on the Senseval-2 English Lexical Sample task. The results corroborate the findings of Levy et al. (2015) that Skip-gram is more efficient in captur904 Word representations Dim. Combination strategy Concatenation Average Fractional Exponential Skip-gram - GoogleNews 300 65.5 65.5 69.4 69.6 GloVe 300 61.7 66.3 66.7 68.3 CBOW - Wiki 300 65.1 65.4 68.9 68.8 Skip-gram - Wiki 300 65.2 65.6 68.9 69.7 PMI - SVD - Wiki 500 65.5 65.3 67.3 66.8 Skip-gram - Wiki 500 65.1 65.6 69.1 69.9 Collobert & Weston 50 58.6 67.3 62.9 64.3 Skip-gram - Wiki 50 65.0 65.7 68.3 68.6 Table 6: F1 percentage performance on the Senseval-2 English Lexical Sample dataset with different word representations models, vector dimensionalities (Dim.) and combination strategies. ing the semantics than CBOW and GloVe. Additionally, the use of embeddings with decay fares well, independently of the type of embedding. The only exception is the C&W embeddings, for which the average strategy works best. We attribute this behavior to the nature of these embeddings, rather than to their dimensionality. This is shown in our comparison against the 50-dimensional Skip-gram embeddings trained on the Wikipedia corpus (bottom of Table 6), which performs well with both decay strategies, outperforming C&W embeddings. 7 Conclusions In this paper we studied different ways of integrating the semantic knowledge of word embeddings in the framework of WSD. We carried out a deep analysis of different parameters and strategies across several WSD tasks. We draw three main findings. First, word embeddings can be used as new features to improve a state-of-the-art supervised WSD that only uses standard features. Second, integrating embeddings on the basis of an exponential decay strategy proves to be more consistent in producing high performance than the other conventional strategies, such as vector concatenation and centroid. Third, the retrofitted embeddings that take advantage of the knowledge derived from semi-structured resources, when used as the only feature for WSD can outperform stateof-the-art supervised models which use standard WSD features. However, the best performance is obtained when standard WSD features are augmented with the additional knowledge from Word2vec vectors on the basis of a decay function strategy. Our hope is that this work will serve as the first step for further studies on re-designing standard WSD features. We release at https:// github.com/iiacobac/ims_wsd_emb all the codes and resources used in our experiments in order to provide a framework for research on the evaluation of new VSM models in the WSD framework. As future work, we plan to investigate the possibility of designing word representations that best suit the WSD framework. 
Acknowledgments The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. References Eneko Agirre and Aitor Soroa. 2009. Personalizing PageRank For Word Sense Disambiguation. In Proceedings of the 12th Conference of the EACL, pages 33–41, Athens, Greece. Eneko Agirre, David Mart´ınez, Oier L´opez de Lacalle, and Aitor Soroa. 2006. Two graph-based algorithms for state-of-the-art wsd. In Proceedings of the 2006 EMNLP, pages 585–593, Sydney, Australia. Eneko Agirre, Aitor Soroa, and Mark Stevenson. 2010. Graph-based word sense disambiguation of biomedical documents. Bioinformatics, 26(22):2889–2896. Eneko Agirre, Oier L´opez de Lacalle, and Aitor Soroa. 2014. Random Walks for Knowledge-based Word Sense Disambiguation. Comp. Ling., 40(1):57–84. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 36th ACL, pages 86–90, Montreal, Quebec, Canada. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! A 905 systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd ACL, volume 1, pages 238–247, Baltimore, Maryland. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A Neural Probabilistic Language Model. The Journal of Machine Learning Research, 3:1137–1155. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In Proceedings of the 12th Conference of the EACL, pages 103–111, Athens, Greece. Jun Fu Cai, Wee Sun Lee, and Yee Whye Teh. 2007. NUS-ML:Improving Word Sense Disambiguation Using Topic Features. In Proceedings of the SemEval-2007, pages 249–252, Prague, Czech Republic. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015a. A Unified Multilingual Semantic Representation of Concepts. In Proceedings of the 53rd ACL, volume 1, pages 741–751, Beijing, China. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015b. NASARI: a novel approach to a semantically-aware representation of items. In Proceedings of the 2015 NAACL, pages 567–577, Denver, Colorado. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 EMNLP, pages 1025–1035, Doha, Qatar. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th ICML, pages 160–167, Helsinki, Finland. Antonio Di Marco and Roberto Navigli. 2013. Clustering and diversifying web search results with graph-based word sense induction. Comp. Ling., 39(3):709–754. Philip Edmonds and Scott Cotton. 2001. Senseval-2: Overview. In The Proceedings of the 2nd International Workshop on Evaluating Word Sense Disambiguation Systems, pages 1–5, Toulouse, France. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting Word Vectors to Semantic Lexicons. In Proceedings of the 2015 NAACL, pages 1606–1615, Denver, Colorado. J. R. Firth. 1957. A synopsis of linguistic theory 193055. Studies in Linguistic Analysis (special volume of the Philological Society), 1952-59:1–32. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings 2013 NAACL, pages 758– 764, Atlanta, Georgia. 
Weiwei Guo and Mona Diab. 2010. Combining Orthogonal Monolingual and Multilingual Sources of Evidence for All Words WSD. In Proceedings of the 48th ACL, pages 1542–1551, Uppsala, Sweden. Taher H. Haveliwala. 2002. Topic-sensitive PageRank. In Proceedings of the 11th international conference on World Wide Web, pages 517–526, Honolulu, Hawai. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving Word Representations Via Global Context And Multiple Word Prototypes. In Proceedings of 50th ACL, volume 1, pages 873–882, Jeju Island, South Korea. Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. SensEmbed: Learning Sense Embeddings for Word and Relational Similarity. In Proceedings of the 53rd ACL, volume 1, pages 95–105, Beijing, China. Matt Insall, Todd Rowland, and Eric W. Weisstein. 2015. “Embedding”. From MathWorld– A Wolfram Web Resource (access Sep 11, 2015) http://mathworld.wolfram.com/ Embedding.html. Thomas K. Landauer and Susan T. Dutnais. 1997. A Solution to Platos Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge. Psychological Review, 104(2):211–240. Yoong Keok Lee and Hwee Tou Ng. 2002. An Empirical Evaluation of Knowledge Sources and Learning Algorithms for Word Sense Disambiguation. In Proceedings of the 2002 EMNLP, volume 10, pages 41–48, Philadelphia, Pennsylvania. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. TACL, 3:211–225. Suresh Manandhar, Ioannis P. Klapaftis, Dmitriy Dligach, and Sameer S. Pradhan. 2010. Semeval-2010 task 14: Word sense induction & disambiguation. In Proceedings of SemEval-2010, pages 63–68, Uppsala, Sweden. Gr´egoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of Recurrent-neuralnetwork Architectures and Learning Methods for Spoken Language Understanding. In INTERSPEECH, pages 3771–3775, Lyon, France. Rada Mihalcea and Ehsanul Faruque. 2004. Senselearner: Minimally Supervised Word Sense Disambiguation for All Words in Open Text. In Proceedings of ACL/SIGLEX Senseval-3, volume 3, pages 155–158, Barcelona, Spain. 906 Rada Mihalcea, Timothy Chklovski, and Adam Kilgarriff. 2004. The Senseval-3 English Lexical Sample Task. In Proceedings of ACL/SIGLEX Senseval-3, pages 25–28, Barcelona, Spain. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. George A. Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G Thomas. 1994. Using a Semantic Concordance for Sense Identification. In Proceedings of the Workshop on HLT, pages 240–243, Plainsboro, New Jersey. Tristan Miller, Chris Biemann, Torsten Zesch, and Iryna Gurevych. 2012. Using Distributional Similarity for Lexical Expansion in Knowledge-based Word Sense Disambiguation. In COLING, pages 1781–1796, Mumbai, India. George A. Miller. 1995. WordNet: A Lexical Database for English. Comm. ACM, 38(11):39–41. Saif Mohammad and Graeme Hirst. 2006. Determining Word Sense Dominance Using a Thesaurus. In Proceedings of the 11th Conference of EACL, pages 121–128, Trento, Italy. Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the ACL, 2:231–244. Roberto Navigli. 2009. 
Word sense disambiguation: a survey. ACM COMPUTING SURVEYS, 41(2):1–69. Hwee Tou Ng and Hian Beng Lee. 1996. Integrating Multiple Knowledge Sources to Disambiguate Word Sense: An Exemplar-based Approach. In Proceedings of the 34th Meeting on ACL, pages 40–47, Santa Cruz, California. Siddharth Patwardhan and Ted Pedersen. 2006. Using WordNet-based Context Vectors to Estimate the Semantic Relatedness of Concepts. In Proceedings of the EACL 2006 Workshop Making Sense of Sense, volume 1501, pages 1–8, Trento, Italy. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 EMNLP, pages 1532–1543, Doha, Qatar. Mohammad Taher Pilehvar and Roberto Navigli. 2014. A Large-scale Pseudoword-based Evaluation Framework for State-of-the-Art Word Sense Disambiguation. Computational Linguistics, 40(4):837– 881. Simone Paolo Ponzetto and Roberto Navigli. 2010. Knowledge-rich word sense disambiguation rivaling supervised systems. In Proceedings of the 48th ACL, pages 1522–1531, Uppsala, Sweden. Sameer S. Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 Task 17: English Lexical Sample, SRL and All Words. In Proceedings of the SemEval-2007, pages 87–92, Prague, Czech Republic. Joseph Reisinger and Raymond J. Mooney. 2010. Multi-Prototype Vector-Space Models of Word Meaning. In Proceedings of the 2010 Annual Conference of the NAACL, pages 109–117, Los Angeles, California. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd ACL, volume 1, pages 1793–1803, Beijing, China. Hui Shen, Razvan Bunescu, and Rada Mihalcea. 2013. Coarse to Fine Grained Sense Disambiguation in Wikipedia. In *SEM 2013: The Secound Joint Conference on Lexical and Computational Semantics, pages 22–31, Atlanta, Georgia. Benjamin Snyder and Martha Palmer. 2004. The Senseval-3 English All-Words Task. In Proceedings of ACL/SIGLEX Senseval-3, pages 41–43, Barcelona, Spain. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 EMNLP, pages 1631–1642, Seattle, USA. Jiri Stetina, Sadao Kurohashi, and Makoto Nagao. 1998. General Word Sense Disambiguation Method Based on a Full Sentential Context. In Usage of WordNet in Natural Language Processing, Proceedings of COLING-ACL Workshop, Montreal, Quebec, Canada. Kaveh Taghipour and Hwee Tou Ng. 2015. SemiSupervised Word Sense Disambiguation Using Word Embeddings in General and Specific Domains. In Proceedings of the 2015 Annual Conference of the NAACL, pages 314–323, Denver, Colorado. Tim Van de Cruys and Marianna Apidianaki. 2011. Latent Semantic Word Sense Induction and Disambiguation. In Proceedings of the 49th ACL, volume 1, pages 1476–1485, Portland, Oregon. Mo Yu and Mark Dredze. 2014. Improving Lexical Embeddings with Semantic Knowledge. In Proceedings of the 52nd ACL, volume 2, pages 545– 550, Baltimore, Maryland. Zhi Zhong and Hwee Tou Ng. 2010. It Makes Sense: A Wide-coverage Word Sense Disambiguation System for Free Text. In Proceedings of the 48th ACL, pages 78–83, Uppsala, Sweden. 907
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 908–918, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Text Understanding with the Attention Sum Reader Network Rudolf Kadlec, Martin Schmid, Ondrej Bajgar & Jan Kleindienst IBM Watson V Parku 4, Prague, Czech Republic {rudolf kadlec,martin.schmid,obajgar,jankle}@cz.ibm.com Abstract Several large cloze-style context-questionanswer datasets have been introduced recently: the CNN and Daily Mail news data and the Children’s Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for questionanswering problems where the answer is a single word from the document. Ensemble of our models sets new state of the art on all evaluated datasets. 1 Introduction Most of the information humanity has gathered up to this point is stored in the form of plain text. Hence the task of teaching machines how to understand this data is of utmost importance in the field of Artificial Intelligence. One way of testing the level of text understanding is simply to ask the system questions for which the answer can be inferred from the text. A well-known example of a system that could make use of a huge collection of unstructured documents to answer questions is for instance IBM’s Watson system used for the Jeopardy challenge (Ferrucci et al., 2010). Cloze style questions (Taylor, 1953), i.e. questions formed by removing a phrase from a sentence, are an appealing form of such questions (for example see Figure 1). While the task is easy to evaluate, one can vary the context, the question Document: What was supposed to be a fantasy sports car ride at Walt Disney World Speedway turned deadly when a Lamborghini crashed into a guardrail. The crash took place Sunday at the Exotic Driving Experience, which bills itself as a chance to drive your dream car on a racetrack. The Lamborghini’s passenger, 36year-old Gary Terry of Davenport, Florida, died at the scene, Florida Highway Patrol said. The driver of the Lamborghini, 24-year-old Tavon Watson of Kissimmee, Florida, lost control of the vehicle, the Highway Patrol said. (...) Question: Officials say the driver, 24-year-old Tavon Watson, lost control of a Answer candidates: Tavon Watson, Walt Disney World Speedway, Highway Patrol, Lamborghini, Florida, (...) Answer: Lamborghini Figure 1: Each example consists of a context document, question, answer cadidates and, in the training data, the correct answer. This example was taken from the CNN dataset (Hermann et al., 2015). Anonymization of this example that makes the task harder is shown in Table 3. sentence or the specific phrase missing in the question to dramatically change the task structure and difficulty. One way of altering the task difficulty is to vary the word type being replaced, as in (Hill et al., 2015). The complexity of such variation comes from the fact that the level of context understanding needed in order to correctly predict different types of words varies greatly. 
While predicting prepositions can easily be done using relatively simple models with very little context knowledge, predicting named entities requires a deeper understanding of the context. Also, as opposed to selecting a random sentence from a text (as done in (Hill et al., 2015)), the questions can be formed from a specific part of a document, such as a short summary or a list of 908 CNN Daily Mail CBT CN CBT NE train valid test train valid test train valid test train valid test # queries 380,298 3,924 3,198 879,450 64,835 53,182 120,769 2,000 2,500 108,719 2,000 2,500 Max # options 527 187 396 371 232 245 10 10 10 10 10 10 Avg # options 26.4 26.5 24.5 26.5 25.5 26.0 10 10 10 10 10 10 Avg # tokens 762 763 716 813 774 780 470 448 461 433 412 424 Vocab. size 118,497 208,045 53,185 53,063 Table 1: Statistics on the 4 data sets used to evaluate the model. CBT CN stands for CBT Common Nouns and CBT NE stands for CBT Named Entites. Statistics were taken from (Hermann et al., 2015) and the statistics provided with the CBT data set. tags. Since such sentences often paraphrase in a condensed form what was said in the text, they are particularly suitable for testing text comprehension (Hermann et al., 2015). An important property of cloze style questions is that a large amount of such questions can be automatically generated from real world documents. This opens the task to data-hungry techniques such as deep learning. This is an advantage compared to smaller machine understanding datasets like MCTest (Richardson et al., 2013) that have only hundreds of training examples and therefore the best performing systems usually rely on handcrafted features (Sachan et al., 2015; Narasimhan and Barzilay, 2015). In the first part of this article we introduce the task at hand and the main aspects of the relevant datasets. Then we present our own model to tackle the problem. Subsequently we compare the model to previously proposed architectures and finally describe the experimental results on the performance of our model. 2 Task and datasets In this section we introduce the task that we are seeking to solve and relevant large-scale datasets that have recently been introduced for this task. 2.1 Formal Task Description The task consists of answering a cloze style question, the answer to which is dependent on the understanding of a context document provided with the question. The model is also provided with a set of possible answers from which the correct one is to be selected. This can be formalized as follows: The training data consist of tuples (q, d, a, A), where q is a question, d is a document that contains the answer to question q, A is a set of possible answers and a ∈A is the ground truth answer. Both q and d are sequences of words from vocabulary V . We also assume that all possible answers are words from the vocabulary, that is A ⊆V , and that the ground truth answer a appears in the document, that is a ∈d. 2.2 Datasets We will now briefly summarize important features of the datasets. 2.2.1 News Articles — CNN and Daily Mail The first two datasets1 (Hermann et al., 2015) were constructed from a large number of news articles from the CNN and Daily Mail websites. The main body of each article forms a context, while the cloze style question is formed from one of short highlight sentences, appearing at the top of each article page. Specifically, the question is created by replacing a named entity from the summary sentence (e.g. “Producer X will not press charges against Jeremy Clarkson, his lawyer says.”). 
Furthermore the named entities in the whole dataset were replaced by anonymous tokens which were further shuffled for each example so that the model cannot build up any world knowledge about the entities and hence has to genuinely rely on the context document to search for an answer to the question. Qualitative analysis of reasoning patterns needed to answer questions in the CNN dataset together with human performance on this task are provided in (Chen et al., 2016).

1 The CNN and Daily Mail datasets are available at https://github.com/deepmind/rc-data

2.2.2 Children's Book Test
The third dataset2, the Children's Book Test (CBT) (Hill et al., 2015), is built from books that are freely available thanks to Project Gutenberg3. Each context document is formed by 20 consecutive sentences taken from a children's book story. Due to the lack of a summary, the cloze style question is then constructed from the subsequent (21st) sentence.

2 The CBT dataset is available at http://www.thespermwhale.com/jaseweston/babi/CBTest.tgz
3 https://www.gutenberg.org/

One can also see how the task complexity varies with the type of the omitted word (named entity, common noun, verb, preposition). (Hill et al., 2015) have shown that while standard LSTM language models have human level performance on predicting verbs and prepositions, they lag behind on named entities and common nouns. In this article we therefore focus only on predicting the first two word types. Basic statistics about the CNN, Daily Mail and CBT datasets are summarized in Table 1.

3 Our Model — Attention Sum Reader
Our model, called the Attention Sum Reader (AS Reader)4, is tailor-made to leverage the fact that the answer is a word from the context document. This is a double-edged sword. While it achieves state-of-the-art results on all of the mentioned datasets (where this assumption holds true), it cannot produce an answer which is not contained in the document.

4 An implementation of the AS Reader is available at https://github.com/rkadlec/asreader

Intuitively, our model is structured as follows:
1. We compute a vector embedding of the query.
2. We compute a vector embedding of each individual word in the context of the whole document (contextual embedding).
3. Using a dot product between the question embedding and the contextual embedding of each occurrence of a candidate answer in the document, we select the most likely answer.

3.1 Formal Description
Our model uses one word embedding function and two encoder functions. The word embedding function e translates words into vector representations. The first encoder function is a document encoder f that encodes every word from the document d in the context of the whole document. We call this the contextual embedding. For convenience we will denote the contextual embedding of the i-th word in d as f_i(d). The second encoder g is used to translate the query q into a fixed length representation of the same dimensionality as each f_i(d). Both encoders use word embeddings computed by e as their input. Then we compute a weight for every word in the document as the dot product of its contextual embedding and the query embedding. This weight might be viewed as an attention over the document d. To form a proper probability distribution over the words in the document, we normalize the weights using the softmax function. This way we model the probability s_i that the answer to query q appears at position i in the document d.
In a functional form this is:

s_i \propto \exp\left(f_i(d) \cdot g(q)\right)    (1)

Finally we compute the probability that word w is a correct answer as:

P(w \mid q, d) \propto \sum_{i \in I(w, d)} s_i    (2)

where I(w, d) is a set of positions where w appears in the document d. We call this mechanism pointer sum attention since we use attention as a pointer over discrete tokens in the context document and then we directly sum the word's attention across all the occurrences. This differs from the usual use of attention in sequence-to-sequence models (Bahdanau et al., 2015) where attention is used to blend representations of words into a new embedding vector. Our use of attention was inspired by Pointer Networks (Ptr-Nets) (Vinyals et al., 2015). A high level structure of our model is shown in Figure 2.

[Figure 2: Structure of the model.]
[Figure 3: Attention in an example with anonymized entities where our system selected the correct answer. Note that the attention is focused only on named entities.]

3.2 Model instance details
In our model the document encoder f is implemented as a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014; Chung et al., 2014) whose hidden states form the contextual word embeddings, that is f_i(d) = \overrightarrow{f_i}(d) \,\|\, \overleftarrow{f_i}(d), where \| denotes vector concatenation and \overrightarrow{f_i} and \overleftarrow{f_i} denote forward and backward contextual embeddings from the respective recurrent networks. The query encoder g is implemented by another bidirectional GRU network. This time the last hidden state of the forward network is concatenated with the last hidden state of the backward network to form the query embedding, that is g(q) = \overrightarrow{g_{|q|}}(q) \,\|\, \overleftarrow{g_1}(q). The word embedding function e is implemented in a usual way as a look-up table V. V is a matrix whose rows can be indexed by words from the vocabulary, that is e(w) = V_w, w \in V. Therefore, each row of V contains the embedding of one word from the vocabulary. During training we jointly optimize parameters of f, g and e.
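As an illustration of how compact this mechanism is, the following NumPy sketch mirrors Eqs. (1) and (2). It assumes the contextual embeddings F (one row per document token) and the query embedding q_emb have already been produced by the two bidirectional GRU encoders, which are omitted here; the names are illustrative and this is not the authors' released implementation.

import numpy as np

def answer_probabilities(doc_tokens, F, q_emb, candidates):
    # Eq. (1): attention over token positions, softmax of dot products with the query embedding
    scores = F @ q_emb                       # one score per position i
    scores -= scores.max()                   # numerical stability before exponentiation
    s = np.exp(scores)
    s /= s.sum()

    # Eq. (2): sum the attention over every occurrence of each candidate answer
    probs = {}
    for a in candidates:
        occurrences = [i for i, w in enumerate(doc_tokens) if w == a]
        probs[a] = float(s[occurrences].sum()) if occurrences else 0.0
    return probs

The candidate with the highest summed attention (e.g. max(probs, key=probs.get)) is then taken as the answer; this summation is also why the model inherently favours frequently occurring candidates, a property analyzed in Section 6.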
stars in crime film about hit man trying to save his estranged son Figure 4: Attention over an example where our system failed to select the correct answer (entity43). The system was probably mislead by the co-occurring word ’film’. Namely, entity11 occurs 7 times in the whole document and 6 times it is together with the word ’film’. On the other hand, the correct answer occurs only 3 times in total and only once together with ’film’. 4 Related Work Several recent deep neural network architectures (Hermann et al., 2015; Hill et al., 2015; Chen et al., 2016; Kobayashi et al., 2016) were applied to the task of text comprehension. The last two architectures were developed independently at the same time as our work. All of these architectures use an attention mechanism that allows them to highlight places in the document that might be relevant to answering the question. We will now briefly describe these architectures and compare 911 them to our approach. 4.1 Attentive and Impatient Readers Attentive and Impatient Readers were proposed in (Hermann et al., 2015). The simpler Attentive Reader is very similar to our architecture. It also uses bidirectional document and query encoders to compute an attention in a similar way we do. The more complex Impatient Reader computes attention over the document after reading every word of the query. However, empirical evaluation has shown that both models perform almost identically on the CNN and Daily Mail datasets. The key difference between the Attentive Reader and our model is that the Attentive Reader uses attention to compute a fixed length representation r of the document d that is equal to a weighted sum of contextual embeddings of words in d, that is r = P i sifi(d). A joint query and document embedding m is then a non-linear function of r and the query embedding g(q). This joint embedding m is in the end compared against all candidate answers a′ ∈A using the dot product e(a′) · m, in the end the scores are normalized by softmax. That is: P(a′|q, d) ∝exp (e(a′) · m). In contrast to the Attentive Reader, we select the answer from the context directly using the computed attention rather than using such attention for a weighted sum of the individual representations (see Eq. 2). The motivation for such simplification is the following. Consider a context “A UFO was observed above our city in January and again in March.” and question “An observer has spotted a UFO in .” Since both January and March are equally good candidates, the attention mechanism might put the same attention on both these candidates in the context. The blending mechanism described above would compute a vector between the representations of these two words and propose the closest word as the answer - this may well happen to be February (it is indeed the case for Word2Vec trained on Google News). By contrast, our model would correctly propose January or March. 4.2 Chen et al. 2016 A model presented in (Chen et al., 2016) is inspired by the Attentive Reader. One difference is that the attention weights are computed with a bilinear term instead of simple dot-product, that is: si ∝exp (fi(d)⊺W g(q)). The document embedding r is computed using a weighted sum as in the Attentive Reader: r = P i sifi(d). In the end P(a′|q, d) ∝exp (e′(a′) · r), where e′ is a new embedding function. Even though it is a simplification of the Attentive Reader this model performs significantly better than the original. 
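To make the contrast with the blending approach explicit, here is a small sketch, again illustrative only and reusing F and q_emb from the previous sketch, of the scoring scheme used by the Attentive Reader family as refined by Chen et al. (2016): attention weights computed with a bilinear term are used to mix the contextual embeddings into a single document vector r, and candidate answers are scored against r through a separate answer-embedding table. The names W (a learned bilinear matrix) and answer_emb are assumptions of this sketch.

import numpy as np

def blended_scores(F, q_emb, W, answer_emb, candidate_ids):
    # bilinear attention: s_i proportional to exp(f_i(d)^T W g(q))
    scores = F @ (W @ q_emb)
    s = np.exp(scores - scores.max())
    s /= s.sum()
    r = s @ F                                # blended document vector: weighted sum of f_i(d)
    # score each candidate against the blended vector and normalize with softmax
    logits = np.array([answer_emb[a] @ r for a in candidate_ids])
    p = np.exp(logits - logits.max())
    return p / p.sum()

Compared with the pointer-sum sketch above, the answer is read off a blended vector rather than picked directly from the document, which is precisely the difference motivating our model in Section 4.1.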
4.3 Memory Networks MemNNs (Weston et al., 2014) were applied to the task of text comprehension in (Hill et al., 2015). The best performing memory networks model setup - window memory - uses windows of fixed length (8) centered around the candidate words as memory cells. Due to this limited context window, the model is unable to capture dependencies out of scope of this window. Furthermore, the representation within such window is computed simply as the sum of embeddings of words in that window. By contrast, in our model the representation of each individual word is computed using a recurrent network, which not only allows it to capture context from the entire document but also the embedding computation is much more flexible than a simple sum. To improve on the initial accuracy, a heuristic approach called self supervision is used in (Hill et al., 2015) to help the network to select the right supporting “memories” using an attention mechanism showing similarities to the ours. Plain MemNNs without this heuristic are not competitive on these machine reading tasks. Our model does not need any similar heuristics. 4.4 Dynamic Entity Representation The Dynamic Entity Representation model (Kobayashi et al., 2016) has a complex architecture also based on the weighted attention mechanism and max pooling over contextual embeddings of vectors for each named entity. 4.5 Pointer Networks Our model architecture was inspired by PtrNets (Vinyals et al., 2015) in using an attention mechanism to select the answer in the context rather than to blend words from the context into an answer representation. While a Ptr-Net consists of an encoder as well as a decoder, which uses the attention to select the output at each step, our model outputs the answer in a single step. Furthermore, 912 the pointer networks assume that no input in the sequence appears more than once, which is not the case in our settings. 4.6 Summary Our model combines the best features of the architectures mentioned above. We use recurrent networks to “read” the document and the query as done in (Hermann et al., 2015; Chen et al., 2016; Kobayashi et al., 2016) and we use attention in a way similar to Ptr-Nets. We also use summation of attention weights in a way similar to MemNNs (Hill et al., 2015). From a high level perspective we simplify all the discussed text comprehension models by removing all transformations past the attention step. Instead we use the attention directly to compute the answer probability. 5 Evaluation In this section we evaluate our model on the CNN, Daily Mail and CBT datasets. We show that despite the model’s simplicity its ensembles achieve state-of-the-art performance on each of these datasets. 5.1 Training Details To train the model we used stochastic gradient descent with the ADAM update rule (Kingma and Ba, 2015) and learning rate of 0.001 or 0.0005. During training we minimized the following negative log-likelihood with respect to θ: −logPθ(a|q, d) (3) where a is the correct answer for query q and document d, and θ represents parameters of the encoder functions f and g and of the word embedding function e. The optimized probability distribution P(a|q, d) is defined in Eq. 2. The initial weights in the word embedding matrix were drawn randomly uniformly from the interval [−0.1, 0.1]. Weights in the GRU networks were initialized by random orthogonal matrices (Saxe et al., 2014) and biases were initialized to zero. We also used a gradient clipping (Pascanu et al., 2012) threshold of 10 and batches of size 32. 
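The optimization setup just described can be summarized in a short sketch; PyTorch is used here purely for illustration and need not match the framework of the released implementation. The objects model and batch, and their field names, are assumptions; only the loss of Eq. (3), the ADAM update with learning rate 0.001 and the gradient-clipping threshold of 10 correspond directly to the text above.

import torch

def make_optimizer(model):
    # ADAM update rule with learning rate 0.001, as in the training details above
    return torch.optim.Adam(model.parameters(), lr=0.001)

def training_step(model, optimizer, batch):
    # `model` is assumed to return per-candidate probabilities P(a | q, d) as in Eq. (2);
    # `batch.answer` holds the index of the ground-truth candidate for each example.
    optimizer.zero_grad()
    probs = model(batch.query, batch.document, batch.candidates)
    picked = probs.gather(1, batch.answer.unsqueeze(1)).squeeze(1)
    loss = -(picked + 1e-8).log().mean()      # Eq. (3): negative log-likelihood of the answer
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)  # clipping threshold 10
    optimizer.step()
    return loss.item()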
During training we randomly shuffled all examples in each epoch. To speedup training, we always pre-fetched 10 batches worth of examples and sorted them according to document length. Hence each batch contained documents of roughly the same length. For each batch of the CNN and Daily Mail datasets we randomly reshuffled the assignment of named entities to the corresponding word embedding vectors to match the procedure proposed in (Hermann et al., 2015). This guaranteed that word embeddings of named entities were used only as semantically meaningless labels not encoding any intrinsic features of the represented entities. This forced the model to truly deduce the answer from the single context document associated with the question. We also do not use pre-trained word embeddings to make our training procedure comparable to (Hermann et al., 2015). We did not perform any text pre-processing since the original datasets were already tokenized. We do not use any regularization since in our experience it leads to longer training times of single models, however, performance of a model ensemble is usually the same. This way we can train the whole ensemble faster when using multiple GPUs for parallel training. For Additional details about the training procedure see Appendix A. 5.2 Evaluation Method We evaluated the proposed model both as a single model and using ensemble averaging. Although the model computes attention for every word in the document we restrict the model to select an answer from a list of candidate answers associated with each question-document pair. For single models we are reporting results for the best model as well as the average of accuracies for the best 20% of models with best performance on validation data since single models display considerable variation of results due to random weight initialization5, even for identical hyperparameter values. Single model performance may consequently prove difficult to reproduce. What concerns ensembles, we used simple averaging of the answer probabilities predicted by ensemble members. For ensembling we used 14, 16, 84 and 53 models for CNN, Daily Mail and CBT CN and NE respectively. The ensemble models were chosen either as the top 70% of all trained models, we call this avg ensemble. Alternatively we use the following algorithm: We started with 5The standard deviation for models with the same hyperparameters was between 0.6-2.5% in absolute test accuracy. 913 CNN Daily Mail valid test valid test Attentive Reader † 61.6 63.0 70.5 69.0 Impatient Reader † 61.8 63.8 69.0 68.0 MemNNs (single model) ‡ 63.4 66.8 NA NA MemNNs (ensemble) ‡ 66.2 69.4 NA NA Dynamic Entity Repres. (max-pool) ♯ 71.2 70.7 NA NA Dynamic Entity Repres. (max-pool + byway)♯70.8 72.0 NA NA Dynamic Entity Repres. + w2v ♯ 71.3 72.9 NA NA Chen et al. (2016) (single model) 72.4 72.4 76.9 75.8 AS Reader (single model) 68.6 69.5 75.0 73.9 AS Reader (avg for top 20%) 68.4 69.9 74.5 73.5 AS Reader (avg ensemble) 73.9 75.4 78.1 77.1 AS Reader (greedy ensemble) 74.5 74.8 78.7 77.7 Table 2: Results of our AS Reader on the CNN and Daily Mail datasets. Results for models marked with † are taken from (Hermann et al., 2015), results of models marked with ‡ are taken from (Hill et al., 2015) and results marked with ♯are taken from (Kobayashi et al., 2016). Performance of ‡ and ♯models was evaluated only on CNN dataset. 
Named entity Common noun valid test valid test Humans (query) (∗) NA 52.0 NA 64.4 Humans (context+query) (∗) NA 81.6 NA 81.6 LSTMs (context+query) ‡ 51.2 41.8 62.6 56.0 MemNNs (window memory + self-sup.) ‡ 70.4 66.6 64.2 63.0 AS Reader (single model) 73.8 68.6 68.8 63.4 AS Reader (avg for top 20%) 73.3 68.4 67.7 63.2 AS Reader (avg ensemble) 74.5 70.6 71.1 68.9 AS Reader (greedy ensemble) 76.2 71.0 72.4 67.5 Table 3: Results of our AS Reader on the CBT datasets. Results marked with ‡ are taken from (Hill et al., 2015). (∗)Human results were collected on 10% of the test set. the best performing model according to validation performance. Then in each step we tried adding the best performing model that had not been previously tried. We kept it in the ensemble if it did improve its validation performance and discarded it otherwise. This way we gradually tried each model once. We call the resulting model a greedy ensemble. 5.3 Results Performance of our models on the CNN and Daily Mail datasets is summarized in Table 2, Table 3 shows results on the CBT dataset. The tables also list performance of other published models that were evaluated on these datasets. Ensembles of our models set the new state-of-the-art results on all evaluated datasets. Table 4 then measures accuracy as the proportion of test cases where the ground truth was among the top k answers proposed by the greedy ensemble model for k = 1, 2, 5. CNN and Daily Mail. The CNN dataset is the most widely used dataset for evaluation of text comprehension systems published so far. Perfor914 G G G G G G G G G G 0.69 0.72 0.75 0.78 0.81 400 800 1200 1600 Number of tokens in context document Test accuracy Dataset G Daily Mail CNN CNN & Daily Mail (a) G G G G G G G G G G 0.66 0.69 0.72 0.75 300 400 500 600 700 Number of tokens in context document Test accuracy Word type G Common Nouns Named Entities Children's Book Test (b) 0.00 0.04 0.08 0.12 0 500 1000 1500 2000 Number of tokens in context document Frequency in test data Dataset Daily Mail CNN CNN & Daily Mail (c) 0.00 0.05 0.10 0.15 0 300 600 900 Number of tokens in context document Frequency in test data Word type Common Nouns Named Entities Children's Book Test (d) Figure 5: Sub-figures (a) and (b) plot the test accuracy against the length of the context document. The examples were split into ten buckets of equal size by their context length. Averages for each bucket are plotted on each axis. Sub-figures (c) and (d) show distributions of context lengths in the four datasets. G G G G G G G G G G 0.5 0.6 0.7 0.8 0.9 20 40 60 Number of candidate answers Test accuracy Dataset G Daily Mail CNN CNN & Daily Mail (a) 0.00 0.05 0.10 0.15 0.20 0 25 50 75 100 Number of candidate answer entities Frequency in test data Dataset Daily Mail CNN CNN & Daily Mail (b) Figure 6: Subfigure (a) illustrates how the model accuracy decreases with an increasing number of candidate named entities. Subfigure (b) shows the overall distribution of the number of candidate answers in the news datasets. G G G G G G G G G 0.5 0.6 0.7 0.8 0.9 1 2 3 4 5 6 7 8 9 n Test accuracy Dataset G Daily Mail CNN CNN & Daily Mail (a) 0.00 0.05 0.10 0.15 1 2 3 4 5 6 7 8 9 n Frequency in test data Dataset Daily Mail CNN CNN & Daily Mail (b) Figure 7: Subfigure (a) shows the model accuracy when the correct answer is the nth most frequent named entity for n ∈[1, 10]. Subfigure (b) shows the number of test examples for which the correct answer was the n–th most frequent entity. The plots for CBT look almost identical (see Appendix B). 
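For completeness, here is a compact sketch of the greedy ensemble construction from Section 5.2. The helper val_acc, which is assumed to return the validation accuracy of the probability-averaged ensemble over a list of models, is hypothetical.

```python
def greedy_ensemble(models, val_acc):
    # candidates ordered by individual validation accuracy, best first
    candidates = sorted(models, key=lambda m: val_acc([m]), reverse=True)
    ensemble = [candidates.pop(0)]            # start from the best single model
    best = val_acc(ensemble)
    for m in candidates:                      # each remaining model is tried exactly once
        score = val_acc(ensemble + [m])
        if score > best:                      # keep it only if validation accuracy improves
            ensemble.append(m)
            best = score
    return ensemble
```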
915 mance of our single model is a little bit worse than performance of simultaneously published models (Chen et al., 2016; Kobayashi et al., 2016). Compared to our work these models were trained with Dropout regularization (Srivastava et al., 2014) which might improve single model performance. However, ensemble of our models outperforms these models even though they use pre-trained word embeddings. On the CNN dataset our single model with best validation accuracy achieves a test accuracy of 69.5%. The average performance of the top 20% models according to validation accuracy is 69.9% which is even 0.5% better than the single best-validation model. This shows that there were many models that performed better on test set than the best-validation model. Fusing multiple models then gives a significant further increase in accuracy on both CNN and Daily Mail datasets.. CBT. In named entity prediction our best single model with accuracy of 68.6% performs 2% absolute better than the MemNN with self supervision, the averaging ensemble performs 4% absolute better than the best previous result. In common noun prediction our single models is 0.4% absolute better than MemNN however the ensemble improves the performance to 69% which is 6% absolute better than MemNN. Dataset k = 1 k = 2 k = 5 CNN 74.8 85.5 94.8 Daily Mail 77.7 87.6 94.8 CBT NE 71.0 86.9 96.8 CBT CN 67.5 82.5 95.4 Table 4: Proportion of test examples for which the top k answers proposed by the greedy ensemble included the correct answer. 6 Analysis To further analyze the properties of our model, we examined the dependence of accuracy on the length of the context document (Figure 5), the number of candidate answers (Figure 6) and the frequency of the correct answer in the context (Figure 7). On the CNN and Daily Mail datasets, the accuracy decreases with increasing document length (Figure 5a). We hypothesize this may be due to multiple factors. Firstly long documents may make the task more complex. Secondly such cases are quite rare in the training data (Figure 5b) which motivates the model to specialize on shorter contexts. Finally the context length is correlated with the number of named entities, i.e. the number of possible answers which is itself negatively correlated with accuracy (see Figure 6). On the CBT dataset this negative trend seems to disappear (Fig. 5c). This supports the later two explanations since the distribution of document lengths is somewhat more uniform (Figure 5d) and the number of candidate answers is constant (10) for all examples in this dataset. The effect of increasing number of candidate answers on the model’s accuracy can be seen in Figure 6a. We can clearly see that as the number of candidate answers increases, the accuracy drops. On the other hand, the amount of examples with large number of candidate answers is quite small (Figure 6b). Finally, since the summation of attention in our model inherently favours frequently occurring tokens, we also visualize how the accuracy depends on the frequency of the correct answer in the document. Figure 7a shows that the accuracy significantly drops as the correct answer gets less and less frequent in the document compared to other candidate answers. On the other hand, the correct answer is likely to occur frequently (Fig. 7a). 7 Conclusion In this article we presented a new neural network architecture for natural language text comprehension. 
While our model is simpler than previously published models, it gives a new state-of-the-art accuracy on all the evaluated datasets. An analysis by (Chen et al., 2016) suggests that on CNN and Daily Mail datasets a significant proportion of questions is ambiguous or too difficult to answer even for humans (partly due to entity anonymization) so the ensemble of our models may be very near to the maximal accuracy achievable on these datasets. Acknowledgments We would also like to thank Tim Klinger for providing us with masked softmax code that we used in our implementation. 916 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. International Conference on Learning Representations. Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A Thorough Examination of the CNN / Daily Mail Reading Comprehension Task. In Association for Computational Linguistics (ACL). Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN EncoderDecoder for Statistical Machine Translation. Empirical Methods in Natural Language Processing (EMNLP). Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv, pages 1–9. David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya a. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. Building Watson: An Overview of the DeepQA Project. AI Magazine, 31(3):59–79. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684– 1692. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: a Method for Stochastic Optimization. International Conference on Learning Representations, pages 1– 13. Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. 2016. Dynamic Entity Representation with Max-pooling Improves Machine Reading. Proceedings of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies (NAACL-HLT). Karthik Narasimhan and Regina Barzilay. 2015. Machine Comprehension with Discourse Relations. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1253–1262. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. On the difficulty of training recurrent neural networks. Proceedings of The 30th International Conference on Machine Learning, pages 1310–1318. Matthew Richardson, Christopher J C Burges, and Erin Renshaw. 2013. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text. Empirical Methods in Natural Language Processing (EMNLP), pages 193–203. 
Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. 2015. Learning AnswerEntailing Structures for Machine Comprehension. Association for Computational Linguistics (ACL), pages 239–249. Andrew M Saxe, James L Mcclelland, and Surya Ganguli. 2014. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. International Conference on Learning Representations. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: prevent NN from overfitting. Journal of Machine Learning Research, 15:1929–1958. Wilson L Taylor. 1953. Cloze procedure: a new tool for measuring readability. Journalism and Mass Communication Quarterly, 30(4):415. Bart van Merrienboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-farley, Jan Chorowski, and Yoshua Bengio. 2015. Blocks and Fuel : Frameworks for deep learning. pages 1–5. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2674–2682. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916. 917 Appendix A Training Details During training we evaluated the model performance after each epoch and stopped the training when the error on the validation set started increasing. The models usually converged after two epochs of training. Time needed to complete a single epoch of training on each dataset on an Nvidia K40 GPU is shown in Table 5. Dataset Time per epoch CNN 10h 22min Daily Mail 25h 42min CBT Named Entity 1h 5min CBT Common Noun 0h 56min Table 5: Average duration of one epoch of training on the four datasets. The hyperparameters, namely the recurrent hidden layer dimension and the source embedding dimension, were chosen by grid search. We started with a range of 128 to 384 for both parameters and subsequently kept increasing the upper bound by 128 until we started observing a consistent decrease in validation accuracy. The region of the parameter space that we explored together with the parameters of the model with best validation accuracy are summarized in Table 6. Rec. Hid. Layer Embedding Dataset min max best min max best CNN 128 512 384 128 512 128 Daily Mail 128 1024 512 128 512 384 CBT NE 128 512 384 128 512 384 CBT CN 128 1536 256 128 512 384 Table 6: Dimension of the recurrent hidden layer and of the source embedding for the best model and the range of values that we tested. We report number of hidden units of the unidirectional GRU; the bidirectional GRU has twice as many hidden units. Our model was implemented using Theano (Bastien et al., 2012) and Blocks (van Merrienboer et al., 2015). Appendix B Dependence of accuracy on the frequency of the correct answer In Section 6 we analysed how the test accuracy depends on how frequent the correct answer is compared to other answer candidates for the news datasets. The plots for the Children’s Book Test looks very similar, however we are adding it here for completeness. G G G G G G 0.4 0.5 0.6 0.7 0.8 1 2 3 4 5 6 n Test accuracy Word type G Common Nouns Named Entities Children's Book Test (a) 0.00 0.05 0.10 0.15 0.20 1 2 3 4 5 6 n Frequency in test data Word type Common Nouns Named Entities Children's Book Test (b) Figure 8: Subfigure (a) shows the model accuracy when the correct answer is among n most frequent named entities for n ∈[1, 10]. Subfigure (b) shows the number of test examples for which the correct answer was the n–th most frequent entity. 918
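Returning to the hyperparameter search of Appendix A: it is an ordinary grid search, sketched minimally below. train_and_validate is a hypothetical helper returning validation accuracy for one configuration; the 128-step granularity is inferred from Table 6, and the outer loop that keeps extending the upper bound by 128 until validation accuracy consistently drops is omitted for brevity.

```python
def grid_search(train_and_validate, hidden_dims, emb_dims):
    # exhaustively score every (recurrent hidden dimension, embedding dimension) pair
    scores = {(h, e): train_and_validate(hidden_dim=h, emb_dim=e)
              for h in hidden_dims for e in emb_dims}
    best = max(scores, key=scores.get)
    return best, scores

# the initial region explored in Appendix A (128 to 384 for both parameters):
# best_cfg, scores = grid_search(train_and_validate,
#                                hidden_dims=range(128, 385, 128),
#                                emb_dims=range(128, 385, 128))
```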
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 919–929, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Investigating LSTMs for Joint Extraction of Opinion Entities and Relations Arzoo Katiyar and Claire Cardie Department of Computer Science Cornell University Ithaca, NY, 14853, USA arzoo, [email protected] Abstract We investigate the use of deep bidirectional LSTMs for joint extraction of opinion entities and the IS-FROM and ISABOUT relations that connect them — the first such attempt using a deep learning approach. Perhaps surprisingly, we find that standard LSTMs are not competitive with a state-of-the-art CRF+ILP joint inference approach (Yang and Cardie, 2013) to opinion entities extraction, performing below even the standalone sequencetagging CRF. Incorporating sentence-level and a novel relation-level optimization, however, allows the LSTM to identify opinion relations and to perform within 1– 3% of the state-of-the-art joint model for opinion entities and the IS-FROM relation; and to perform as well as the state-of-theart for the IS-ABOUT relation — all without access to opinion lexicons, parsers and other preprocessing components required for the feature-rich CRF+ILP approach. 1 Introduction There has been much research in recent years in the area of fine-grained opinion analysis where the goal is to identify subjective expressions in text along with their associated sources and targets. More specifically, fine-grained opinion analysis aims to identify three types of opinion entities: • opinion expressions, O, which are direct subjective expressions (i.e., explicit mentions of otherwise private states or speech events expressing private states (Wiebe and Cardie, 2005)); • opinion targets, T, which are the entities or topics that the opinion is about; and • opinion holders, H, which are the entities expressing the opinion. In addition, the task involves identifying the ISFROM and IS-ABOUT relations between an opinion expression and its holder and target, respectively. In the sample sentences, numerical subscripts indicate an IS-FROM or IS-ABOUT relation. S1 [The sale]T1 [infuriated]O1 [Beijing]H1,2 which [regards]O2 [Taiwan]T2 an integral part of its territory awaiting reunification, by force if necessary. S2 “[Our agency]T1,H2 [seriously needs]O2 [equipment for detecting drugs]T2,” [he]H1 [said]O1. In S1, for example, “infuriated” indicates that there is an (negative) opinion from “Beijing” regarding “the sale.”1 Traditionally, the task of extracting opinion entities and opinion relations was handled in a pipelined manner, i.e., extracting the opinion expressions first and then extracting opinion targets and opinion holders based on their syntactic and semantic associations with the opinion expressions (Kim and Hovy, 2006; Kobayashi et al., 2007). More recently, methods that jointly infer the opinion entity and relation extraction tasks (e.g., using Integer Linear Programming (ILP)) have been introduced (Choi et al., 2006; Yang and Cardie, 2013) and show that the existence of opinion relations provides clues for the identification of opinion entities and vice-versa, and thus results in better performance than a pipelined approach. However, the success of these methods depends critically on the availability of opinion lexicons, dependency parsers, named-entity taggers, etc. 1This paper does not attempt to determine the sentiment, i.e., the positive or negative polarity, of an opinion. 
919 Alternatively, neural network-based methods have been employed. In these approaches, the required latent features are automatically learned as dense vectors of the hidden layers. Liu et al. (2015), for example, compare several variations of recurrent neural network methods and find that long short-term memory networks (LSTMs) perform the best in identifying opinion expressions and opinion targets for the specific case of product/service reviews. Motivated by the recent success of LSTMs on this and other problems in NLP, we investigate here the use of deep bi-directional LSTMs for joint extraction of opinion expressions, holders, targets and the relations that connect them. This is the first attempt to handle the full opinion entity and relation extraction task using a deep learning approach. In experiments on the MPQA dataset for opinion entities (Wiebe and Cardie, 2005; Wilson, 2008), we find that standard LSTMs are not competitive with the state-of-the-art CRF+ILP joint inference approach of Yang and Cardie (2013), performing below even the standalone sequencetagging CRF. Inspired by Huang et al. (2015), we show that incorporating sentence-level, and our newly proposed relation-level optimization, allows the LSTM to perform within 1–3% of the ILP joint model for all three opinion entity types and to do so without access to opinion lexicons, parsers or other preprocessing components. For the primary task of identifying opinion entities together with their IS-FROM and IS-ABOUT relations, we show that the LSTM with sentenceand relation-level optimizations outperforms an LSTM baseline that does not employ joint inference. When compared to the CRF+ILP-based joint inference approach, the optimized LSTM performs slightly better for the IS-ABOUT2 relation and within 3% for the IS-FROM relation. In the sections that follow, we describe: related work (Section 2) and the multi-layer bi-directional LSTM (Section 3); the LSTM extensions (Section 4); the experiments on the MPQA corpus (Sections 5 and 6) and error analysis (Section 7). 2Target and IS-ABOUT relation identification is one important aspect of opinion analysis that hasn’t been much addressed in previous work and has proven to be difficult for existing methods. 2 Related Work LSTM-RNNs (Hochreiter and Schmidhuber, 1997) have recently been applied to many sequential modeling and prediction tasks, such as machine translation (Bahdanau et al., 2014; Sutskever et al., 2014), speech recognition (Graves et al., 2013), NER (Hammerton, 2003). The bi-directional variant of RNNs has been found to perform better as it incorporates information from both the past and the future (Schuster and Paliwal, 1997; Graves et al., 2013). Deep RNNs (stacked RNNs) (Schmidhuber, 1992; Hihi and Bengio, 1996) capture more abstract and higher-level representation in different layers and benefit sequence modeling tasks (˙Irsoy and Cardie, 2014). Collobert et al. (2011) found that adding dependencies between the tags in the output layer improves the performance of Semantic Role Labeling task. Later, Huang et al. (2015) also found that adding a CRF layer on top of bi-directional LSTMs to capture these dependencies can produce state-of-the-art performance on part-of-speech (POS), chunking and NER. For fine-grained opinion extraction, earlier work (Wilson et al., 2005; Breck et al., 2007; Yang and Cardie, 2012) focused on extracting subjective phrases using a CRF-based approach from opendomain text such as news articles. Choi et al. 
(2005) extended the task to jointly extract opinion holders and these subjective expressions. Yang and Cardie (2013) proposed a ILP-based jointinference model to jointly extract the opinion entities and opinion relations, which performed better than the pipelined based approaches (Kim and Hovy, 2006). In the neural network domain, ˙Irsoy and Cardie (2014) proposed a deep bi-directional recurrent neural network for identifying subjective expressions, outperforming the previous CRF-based models. Irsoy and Cardie (2013) additionally proposed a bi-directional recursive neural network over a binary parse tree to jointly identify opinion entities, but performed significantly worse than the feature-rich CRF+ILP approach of Yang and Cardie (2013). Liu et al. (2015) used several variants of recurrent neural networks for joint opinion expression and aspect/target identification on customer reviews for restaurants and laptops, outperforming the feature-rich CRF based baseline. In the product reviews domain, however, the opinion holder is generally the reviewer and the task 920 does not involve identification of relations between opinion entities. Hence, standard LSTMs are applicable in this domain. None of the above neural network based models can jointly model opinion entities and opinion relations. In the relation extraction domain, several neural networks have been proposed for relation classification, such as RNN-based models (Socher et al., 2012) and LSTM-based models (Xu et al., 2015). These models depend on constituent or dependency tree structures for relation classification, and also do not model entities jointly. Recently, Miwa and Bansal (2016) proposed a model to jointly represent both entities and relations with shared parameters, but it is not a joint-inference framework. 3 Methodology For our task, we propose the use of multi-layer bi-directional LSTMs, a type of recurrent neural network. Recurrent neural networks have recently been used for modeling sequential tasks. They are capable of modeling sequences of arbitrary length by repetitive application of a recurrent unit along the tokens in the sequence. However, recurrent neural networks are known to have several disadvantages like the problem of vanishing and exploding gradients. Because of these problems, it has been found that recurrent neural networks are not sufficient for modeling long term dependencies. Hochreiter and Schmidhuber (1997), thus proposed long short term memory (LSTMs), a variant of recurrent neural networks. 3.1 Long Short Term Memory (LSTM) Long short term memory networks are capable of learning long-term dependencies. The recurrent unit is replaced by a memory block. The memory block contains two cell states – memory cell Ct and hidden state ht; and three multiplicative gates – input gate it, forget gate ft and output gate ot. These gates regulate the addition or removal of information to the cell state thus overcoming vanishing and exploding gradients. ft = σ(Wfxt + Ufht−1 + bf) it = σ(Wixt + Uiht−1 + bi) The forget gate ft and input gate it above decides what part of the information we are going to throw away from the cell state and what new information we are going to store in the cell state. The sigmoid outputs a number between 0 and 1 where 0 implies that the information is completely lost and 1 means that the information is completely retained. eCt = tanh(Wcxt + Ucht−1 + bc) Ct = it ∗eCt + ft ∗Ct−1 Thus, the intermediate cell state eCt and previous cell state Ct−1 are used to update the new cell state Ct. 
ot = σ(Woxt + Uoht−1 + VoCt + bo) ht = ot ∗tanh(Ct) Next, we update the hidden state ht based on the output gate ot and the cell state Ct. We pass both the cell state Ct and the hidden state ht to the next time step. 3.2 Multi-layer Bi-directional LSTM In sequence tagging problems, it has been found that only using past information for computing the hidden state ht may not be sufficient. Hence, previous works (Graves et al., 2013; ˙Irsoy and Cardie, 2014) proposed the use of bi-directional recurrent neural networks for speech and NLP tasks, respectively. The idea is to also process the sequence in the backward direction. Hence, we can compute the hidden state −→ ht in the forward direction and ←− ht in the backward direction for every token. Also, in more traditional feed-forward networks, deep networks have been found to learn abstract and hierarchical representations of the input in different layers (Bengio, 2009). The multilayer LSTMs have been proposed (Hermans and Schrauwen, 2013) to capture long-term dependencies of the input sequences in different layers. For the first hidden layer, the computation proceeds similar to that described in Section 3.1. However, for higher hidden layers i the input to the memory block is the hidden state and memory cell from the previous layer i −1 instead of the input vector representation. For this paper, we only use the hidden state from the last layer L to compute the output state yt. zt = −→ V −→ ht(L) + ←− V ←− ht(L) + c yt = g(zt) 4 Network Training For our problem, we wish to predict a label y from a discrete set of classes Y for every word in a sentence. As is the norm, we train the network by 921 maximizing the log-likelihood X (x,y)∈T log p(y|x, θ) over the training data T, with respect to the parameters θ, where x is the input sentence and y is the corresponding tag sequence. We propose three alternatives for the log-likelihood computation. 4.1 Word-Level Log-Likelihood (WLL) We first formulate a word-level log-likelihood (WLL) (adapted from Collobert et al. (2011)) that considers all words in a sentence independently. We interpret the score zt corresponding to the ith tag [zt]i as a conditional tag probability log p(i|x, θ) by applying a softmax operation. p(i|x, θ) = softmax(zi t) = ezi t P j ezj t For the tag sequence y given the input sentence x the log-likelihood is : log p(y|x, θ) = zy −logadd j zj 4.2 Sentence-Level Log-Likelihood (SLL) In the word-level approach above, we discard the dependencies between the tags in a tag sequence. In our sentence-level log-likelihood (SLL) formulation (also adapted from Collobert et al. (2011)) we incorporate these dependencies: we introduce a transition score [A]i,j for jumping from tag i to tag j of adjacent words in the tag sequence to the set of parameters eθ. These transition scores are going to be trained. We use both the transition scores [A] and the output scores z to compute the sentence score s(x|T t=1, y|T t=1, eθ). s(x, y, eθ) = T X t=1  [A]yt−1,yt + zyt t  We normalize this sentence score over all possible paths of tag sequences ey to get the log conditional probability as below : log psent(y|x, eθ) = s(x, y, eθ) −logadd ey s(x, ey, eθ) Even though the number of tag sequences grows exponentially with the length of the sentence, we can compute the normalization factor in linear time (Collobert et al., 2011). At inference time, we find the best tag sequence argmax ey s(x, ey, eθ) for an input sentence x using Viterbi decoding. 
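As a concrete rendering of the sentence-level objective, the NumPy sketch below computes the sentence score and the normaliser with the standard forward recursion in linear time; z is the T×Y matrix of per-word tag scores and A the Y×Y transition matrix, and the transition from an initial start tag is omitted for brevity. Replacing the logadd by a max with back-pointers yields the Viterbi decoder used at inference time.

```python
import numpy as np

def sentence_score(z, A, y):
    # s(x, y) = sum_t ( A[y_{t-1}, y_t] + z_t[y_t] )
    score = z[0, y[0]]
    for t in range(1, len(y)):
        score += A[y[t - 1], y[t]] + z[t, y[t]]
    return score

def log_partition(z, A):
    # logadd over all tag sequences, computed by the forward recursion
    alpha = z[0].copy()
    for t in range(1, z.shape[0]):
        alpha = z[t] + np.logaddexp.reduce(alpha[:, None] + A, axis=0)
    return np.logaddexp.reduce(alpha)

def sentence_log_likelihood(z, A, y):
    return sentence_score(z, A, y) - log_partition(z, A)
```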
In this case, we basically maximize the same likelihood as in a CRF except that a CRF is a linear model. The above sentence-level log-likelihood is useful for sequential tagging, but it cannot be directly used for modeling relations between non-adjacent words in the sentence. In the next subsection, we extend the above idea to also model relations between non-adjacent words. 4.3 Relation-Level Log-Likelihood (RLL) For every word xt in the sentence x, we output the tag yt and a distance dt. If a word at position t is related to a word at position k and k < t, then dt = (t −k). If word t is not related to any other word to its left, then dt = 0. Let DLeft be the maximum distance we model for such left-relations 3. zt = −→ Vr −→ ht(L) + ←− Vr ←− ht(L) + cr We let −→ Vr ∈R(DLeft+1)×Y ×dh (where dh is the dimensionality of hidden units) such that the output state zt ∈R(DLeft+1)×Y as compared to zt ∈ R(1)×Y in case of sentence-level log-likelihood. In order to add dependencies between tags and relations, we introduce a transition score [A]i,j,d′,d” for jumping from tag i and relation distance d ′ to tag j and relation distance d” of adjacent words in the tag sequence, to the set of parameters θ ′. These transition scores are also going to be trained similar to the transition scores in sentence-level log-likelihood. The sentence score s(x|T t=1, y|T t=1, d|T t=1, θ ′) is: s(x, y, d, θ ′) = T X t=1  [A]yt−1,yt,dt−1,dt + zyt,dt t  We normalize this sentence score over all possible paths of tag ey and relation sequences ed to get the log conditional probability as below : log prel,Left(y, d|x, eθ) =s(x, y, d, θ ′) −logadd ey, ed s(x, ey, ed, θ ′) 3Later in this section, we will also add a similar likelihood in the objective function for right-relations, i.e., for each word the related words are in its right context. 922 The sale infuriated Beijing which regards Taiwan an integral part ... Entity tags B T I T B O B H O B O B T O O O ... Left Rel (dleft) 0 0 0 0 0 2 1 0 0 0 ... Right Rel (dright) 2 1 1 0 0 0 0 0 0 0 ... IS-ABOUT IS-FROM IS-FROM IS-ABOUT Figure 1: Gold standard annotation for an example sentence from MPQA dataset. O represents the ‘Other’ tag in the BIO scheme. We can still compute the normalization factor in linear time similar to sentence-level loglikelihood. At inference time, we jointly find the best tag and relation sequence argmax ey, ed s(x, ey, ed, θ ′) for an input sentence x using Viterbi decoding. For our task of joint extraction of opinion entities and relations, we train our model to predict tag y and relation distance d for every word in the sentence by maximizing the log-likelihood (SLL+RLL) below using Adadelta (Zeiler, 2012). X (x,y)∈T log psent(y|x, θ ′)+ log prel,Left(y, d|x, θ ′) + log prel,Right(y, d|x, θ ′) 5 Experiments 5.1 Data We use the MPQA 2.0 corpus (Wiebe and Cardie, 2005; Wilson, 2008). It contains news articles and editorials from a wide variety of news sources. There are a total of 482 documents in our dataset containing 9471 sentences with phrase-level annotations. We set aside 132 documents as a development set and use the remaining 350 documents as the evaluation set. We report the results using 10-fold cross validation at the document level to mimic the methodology of Yang and Cardie (2013). The dataset contains gold-standard annotations for opinion entities — expressions, targets, holders. We use only the direct subjective/opinion expressions. 
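One way to realise the relation-level recursion is to treat every (relation-distance, tag) pair as a single composite label, after which the computation is identical to the sentence-level case. The NumPy sketch below illustrates this for one direction; the shapes and the flattening are our own choices, not the authors' implementation.

```python
import numpy as np

def relation_log_likelihood(z_rel, A_rel, tags, dists):
    # z_rel: [T, D+1, Y] per-word scores for every (distance, tag) pair
    # A_rel: [Y, Y, D+1, D+1] transition scores [A]_{i,j,d',d''}
    T, D1, Y = z_rel.shape
    z = z_rel.reshape(T, D1 * Y)                          # composite label (d, i) -> d*Y + i
    A = A_rel.transpose(2, 0, 3, 1).reshape(D1 * Y, D1 * Y)
    y = [d * Y + i for i, d in zip(tags, dists)]          # gold composite label sequence
    gold = z[0, y[0]] + sum(A[y[t - 1], y[t]] + z[t, y[t]] for t in range(1, T))
    alpha = z[0].copy()                                   # forward recursion over composites
    for t in range(1, T):
        alpha = z[t] + np.logaddexp.reduce(alpha[:, None] + A, axis=0)
    return gold - np.logaddexp.reduce(alpha)
```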
There are also annotations for opinion relations – IS-FROM between opinion holders and opinion expressions; and IS-ABOUT between opinion targets and opinion expressions. These relations can overlap but we discard all relations that contain sub-relations similar to Yang and Cardie (2013). We also leave identification of overlapping relations for future work. Figure 1 gives an example of an annotated sentence from the dataset: boxes denote opinion entities and opinion relations are shown by arcs. We interpret these relations arcs as directed — from an opinion expression towards an opinion holder, and from an opinion target towards an opinion expression. In order to use the RLL formulation as defined in Section 4.3, we pre-process these relation arcs to obtain the left-relation distances (dleft) and right-relation distances (dright) as shown in Figure 1. For each word in an entity, we find its distance to the nearest word in the related entity. These distances become our relation tags. The entity tags are interpreted using the BIO scheme, also shown in the figure. Our RLL model jointly models the entity tags and relation tags. At inference time, these entity tags and relation tags are used together to determine IS-FROM and IS-ABOUT relations. We use a simple majority vote to determine the final entity tag from SLL+RLL model. 5.2 Evaluation Metrics We use precision, recall and F-measure (as in Yang and Cardie (2013)) as evaluation metrics. Since the identification of exact boundaries for opinion entities is hard even for humans (Wiebe and Cardie, 2005), soft evaluation methods such as Binary Overlap and Proportional Overlap are reported. Binary Overlap counts every overlapping predicted and gold entity as correct, while Proportional Overlap assigns a partial score proportional to the ratio of overlap span and the correct span (Recall) or the ratio of overlap span and the predicted span (Precision). For the case of opinion relations, we report precision, recall and F-measure according to the Binary Overlap. It considers a relation correct if there is an overlap between the predicted opin923 Opinion Expression Opinion Target Opinion Holder Method P R F1 P R F1 P R F1 CRF 84.423.24 61.613.20 71.172.66 80.382.72 46.804.41 59.104.06 73.374.09 49.713.46 59.213.49 CRF+ILP 73.533.90 74.892.51 74.112.49 77.273.49 56.943.94 65.403.07 67.003.17 67.223.50 67.222.54 LSTM+WLL 67.884.49 66.133.20 66.872.66 58.714.87 54.923.23 56.501.51 60.334.54 63.342.33 61.652.37 LSTM+SLL 70.455.12 66.653.46 68.373.14 63.024.61 56.773.98 59.653.61 61.853.82 63.123.59 62.352.46 LSTM+SLL+RLL 71.735.35 70.923.96 71.112.71 64.525.52 65.944.74 64.841.44 62.753.75 67.174.37 64.712.23 CRF 80.783.27 57.623.24 67.192.63 71.813.22 42.363.78 53.233.69 71.563.54 48.613.51 57.863.43 CRF+ILP 71.034.03 69.722.37 70.222.44 71.943.25 49.833.24 58.722.80 65.703.07 65.913.63 65.682.61 LSTM+WLL 64.474.79 59.453.52 61.672.26 52.725.01 44.212.54 47.851.41 58.414.72 59.722.52 52.452.23 LSTM+SLL 65.975.46 61.763.69 63.603.05 54.464.49 50.164.38 52.013.05 59.803.29 61.273.75 60.402.26 LSTM+SLL+RLL 65.484.92 65.543.65 65.562.71 52.756.81 60.544.78 55.811.96 59.443.56 65.514.22 62.182.50 Table 1: Performance on opinion entity extraction. Top table shows Binary Overlap performance; bottom table shows Proportional Overlap performance. Superscripts designate one standard deviation. ion expression and the gold opinion expression as well as an overlap between the predicted entity (holder/target) and the gold entity (holder/target). 5.3 Baselines CRF+ILP. 
We use the ILP-based joint inference model (Yang and Cardie, 2013) as baseline for both the entity and relation extraction tasks. It represents the state-of-the-art for fine-grained opinion extraction. Their method first identifies opinion entities using CRFs (an additional baseline) with a variety of features such as words, POS tags, and lexicon features (the subjectivity strength of the word in the Subjectivity Lexicon). They also train a relation classifier (logistic regression) by over-generating candidates from the CRFs (50best paths) using local features such as word, POS tags, subjectivity lexicons as well as semantic and syntactic features such as semantic frames, dependency paths, WordNet hypernyms, etc. Finally, they use ILP for joint-inference to find the optimal prediction for both opinion entity and opinion relation extraction. LSTM+SLL+Softmax. As an additional baseline for relation extraction, we train a softmax classifier on top of our SLL framework. We jointly learn the relation classifier and SLL model. For every entity pair [x]j i, [x]l k, we first sum the start and end word output representation [zt] and then concatenate them to learn softmax weight W ′ where W ′ ∈R3×2dh. yrel = softmax(W ′ [zt]i + [zt]j [zt]k + [zt]l  ) The inference is pipelined in this case. At the time of inference, we first predict the entity spans and then use these spans for relation classification. 5.4 Hyperparameter and Training Details We use multi-layer bi-directional LSTMs for all the experiments such that the number of hidden layers is 3 and the dimensionality of hidden units (dh) is 50. We use Adadelta for training. We initialize our word representation using publicly available word2vec (Mikolov et al., 2013) trained on Google News dataset and keep them fixed during training. For RLL, we keep DLeft and DRight as 15. All the weights in the network are initialized from small random uniform noise. We train all our models for 200 epochs. We do not pretrain our network. We regularize our network using dropout (Srivastava et al., 2014) with the dropout rate tuned using the development set. We select the final model based on development-set performance (average of Proportional Overlap for entities and Binary Overlap for relations). 6 Results 6.1 Opinion Entities Table 1 shows the performance of opinion entity identification using the Binary Overlap and Proportional Overlap evaluation metrics. We discuss specific results in the paragraphs below. WLL vs. SLL. SLL performs better than WLL on all entity types, particularly with respect to Proportional Overlap on opinion holder and target entities. A similar trend can be seen for the example sentences in Table 3. In S1, SLL extracts “has been in doubt” as the opinion expression whereas WLL only identifies “has”. Similarly in S2, WLL annotates “Saudi Arabia’s request on a case-bycase” as the target while SLL correctly includes “basis” in its annotation. Thus, we find that modeling the transitions between adjacent tags enables 924 IS-ABOUT IS-FROM Method P R F1 P R F1 CRF+ILP 61.574.56 47.653.12 54.392.49 64.043.08 58.794.42 61.173.02 LSTM+SLL+Softmax 36.235.10 36.127.75 35.403.35 36.445.26 40.196.13 37.603.42 LSTM+SLL+RLL 62.483.87 49.802.84 54.982.54 64.193.81 53.756.00 58.223.01 Table 2: Performance on opinion relation extraction using Binary Overlap on the opinion entities. Superscripts designate one standard deviation. SLL to find entire opinion entity phrases better than WLL, leading to better Proportional Overlap scores. SLL vs. SLL+RLL. 
From Table 1, we see that the joint-extraction model (SLL+RLL) performs better than SLL as expected. More specifically, SLL+RLL model has better recall for all opinion entity types. The example sentences from Table 3 corroborate these results. In S1, SLL+RLL identifies “announced” as an opinion expression, which was missing in both WLL and SLL. In S3, neither the WLL nor the SLL model can annotate opinion holder (H1) or the target (T1), but SLL+RLL correctly identifies the opinion entities because of modeling the relations between the opinion expression “will decide” and the holder/target entities. CRF vs. LSTM-based Models. From the analysis of the performance in Table 1, we find that our WLL and SLL models perform worse while our best SLL+RLL model can only match the performance of the CRF baseline on opinion expressions. Even though the recall of all our LSTMbased models is higher than the recall of the CRFbaseline for opinion expressions, we cannot match the precision of CRF baseline. We suspect that the reason for such high precision on the part of the CRF is its access to carefully prepared subjectivity-lexicons4. Our LSTM-based models do not rely on such features except via the wordvectors. With respect to holders and targets, we find that our SLL model performs similar to the CRF baseline. However, the SLL+RLL model outperforms CRF baseline. CRF+ILP vs. SLL+RLL. Even though we find that our LSTM-based joint-model (SLL+RLL) outperforms our LSTM-based only-entity extraction model (SLL), the performance is still below the ILP-based joint-model (CRF+ILP). However, we perform comparably with respect to target en4http://mpqa.cs.pitt.edu/lexicons/ subj lexicon/ tities (Binary Overlap). Also, our recall on targets is much better than all other models whereas the recall on holders is very similar to CRF+ILP. Our SLL+RLL model can identify targets such as “Australia’s involvement in Kyoto” which the ILPbased model cannot, as observed for S1 in Table 3. In S3, the ILP-based model also erroneously divides the target “consider Saudi Arabia’s request on a case-by-case basis” into a holder “Saudi Arabia’s” and opinion expression “request”, while SLL+RLL model can correctly identify it. We will compare the two models in detail in Section 7. 6.2 Opinion Relations The extraction of opinion relations is our primary task. Table 25 shows the performance on opinion relation extraction task using Binary Overlap. SLL+Softmax vs. SLL+RLL. The opinion entities and relations are jointly modeled in both the models, but we see a significant improvement in performance by adding relation level dependencies to the model vs. learning a classifier on top of sentence-level dependencies to learn the relation between entities. LSTM+SLL+RLL performs much better in terms of both precision and recall on both IS-FROM and IS-ABOUT relations. CRF+ILP vs. SLL+RLL. We find that our SLL+RLL model performs comparably and even slightly better on IS-ABOUT relations. Such performance is encouraging because our LSTMbased model does not rely on features such as dependency paths, semantic frames or subjectivity lexicons for our model. Our sequential LSTM model is able to learn these relations thus validating that LSTMs can model long-term dependencies. However, for IS-FROM relations, we find that our recall is lower than the ILP-based joint model. 5Yang and Cardie (2013) omitted a subset of targets and IS-ABOUT relations. 
We fixed this and re-ran their models on the updated dataset, obtaining the lower F-score 54.39 for IS-ABOUT relations. 925 S1 : [Australia’s involvement in Kyoto]T1 [has been in doubt]O1 ever since [the US President, George Bush]H2, [announced]O2 last year that [ratifying the protocol]T2 would hurt the US economy. CRF+ILP Australia’s involvement in Kyoto [has been in doubt]O1 ever since the US President, George Bush, announced last year that [ratifying the protocol]T1 would hurt the US economy. WLL [Australia’s involvement in Kyoto]T [has]O been in doubt ever since the US [President]H, [George Bush]H, announced last year that ratifying the protocol would hurt the US economy. SLL [Australia’s involvement in Kyoto]T [has been in doubt]O ever since the US President, George Bush, announced last year that ratifying the protocol would hurt the US economy. SLL+RLL [Australia’s involvement in Kyoto]T [has been in doubt]O ever since the US President, [George Bush]H2, [announced]O2 last year that [ratifying the protocol]T2 would hurt the US economy. S2 : Bush said last week [he]H1,2 [was willing]O1 [to consider]O2 [Saudi Arabia’s request on a case-by-case basis]T2 but [U.S. officials]H3 [doubted]O3 [it would happen any time soon]T3. CRF+ILP [Bush]H1 [said]O1 last week [he]H2 [was willing to consider]O2 [Saudi Arabia’s]H3 [request]O3 on a case-by-case basis but [U.S. officials]H4 [doubted]O4 [it]T4 would happen any time soon. WLL Bush said last week [he]H [was willing]O to [consider]O [Saudi Arabia’s request on a case-by-case]T basis but [U.S. officials]H [doubted]O [it]T would [happen any time soon]T. SLL Bush said last week [he]H [was willing]O to [consider Saudi Arabia’s request on a case-by-case basis]T but [U.S. officials]H [doubted]O [it]T would happen any time soon. SLL+RLL Bush said last week [he]H1 [was willing to consider]O1 [Saudi Arabia’s request on a case-by-case basis]T1 but [U.S. officials]H2 [doubted]O2 [it would happen any time soon]T2. S3 : Hence, [the Organization of Petroleum Exporting Countries (OPEC)]H1, [will decide]O1 at its meeting on Wednesday [whether or not to cut its worldwide crude production in an effort to shore up energy prices]T1. CRF+ILP Hence, the Organization of Petroleum Exporting Countries (OPEC), [will decide]O1 at its meeting on Wednesday whether [or not to cut its worldwide crude production in an effort to shore up energy prices]T1. WLL Hence, the Organization of Petroleum Exporting Countries (OPEC), will [decide]O at its meeting on Wednesday whether or not to cut its worldwide crude production in an effort to shore up energy prices. SLL Hence, the Organization of Petroleum Exporting Countries (OPEC), [will decide]O at its meeting on Wednesday whether or not to cut its worldwide crude production in an effort to shore up energy prices. SLL+RLL Hence, [the Organization of Petroleum Exporting Countries (OPEC)]H1, [will decide]O1 at its meeting on Wednesday whether [or not to cut its worldwide crude production in an effort to shore up energy prices]T1. Table 3: Output from different models. The first row for each example is the gold standard. 7 Discussion In this section, we discuss the various advantages and disadvantages of the LSTM-based SLL+RLL model as compared to the jointinference (CRF+ILP) model. We provide examples from the dataset in Table 4. From Table 2, we find that SLL+RLL model performs worse with respect to the opinion expression entities and opinion holder entities. On careful analysis of the output, we found cases such as S1 in Table 4. 
For such sentences SLL+RLL model prefers to annotate the opinion target (T3) “US requests for more oil exports”, whereas the ILP model annotates the embedded opinion holder (H4) “US” and opinion expression (T4) “requests”. Both models are valid with respect to the gold-standard. In order to simplify our problem, we discard these embedded relations during training similar to Yang and Cardie (2013). However, for future work we would like to model these overlapping relations which could potentially improve our performance on opinion holders and opinion expressions. We also found several cases such as S2, where the SLL+RLL model fails to annotate “said” as an opinion expression. The gold standard opinion expressions include speech events like “said” or “a statement”, but not all occurrences of these speech events are opinion expressions, some are merely objective events. In S2, “was martyred” is an indication of an opinion being expressed, so “said” is annotated as an opinion expression. From our observation, the ILP model is more relaxed in annotating most of these speech events as opinion expressions and thus likely to identify corresponding 926 S1 : However, [Chavez]T1 who [is known for]O1 [his]H2 [ala Fidel Castro left-leaning anti-American philosophy]O2 had on a number of occasions [rebuffed]O3 [[US]H4 [requests]O4 for [more oil exports]T4 ]T3. CRF+ILP However, [Chavez]H1 who [is known]O for [his ala Fidel Castro]H2 [left-leaning anti-American philosophy]O2 had on a number of occasions [rebuffed]O1 [US]H3 [requests]O3 for more oil exports. SLL+RLL However, Chavez who [is known]O for his ala Fidel Castro left-leaning anti-American [philosophy]O had on a number of occasions [rebuffed]O1 [US requests for more oil exports]T1. S2 : A short while ago, [our correspondent in Bethlehem]H1 [said]O1 that [Ra’fat al-Bajjali]T1 was martyred of wounds sustained in the explosion. CRF+ILP A short while ago, [our correspondent]H1 in Bethlehem [said]O1 that [Ra’fat al-Bajjali]T1 was martyred of wounds sustained in the explosion. SLL+RLL A short while ago, our correspondent in Bethlehem said that Ra’fat al-Bajjali was martyred of wounds sustained in the explosion. S3 : This is no criticism, and is widely known and appreciated. CRF+ILP This is no criticism, and is widely known and appreciated. SLL+RLL [This]T1 [is no criticism]O1, and is widely [known and appreciated]O. S4 : From the fact that mothers care for their young, we can not deduce that they ought to do so, Hume argued. CRF+ILP From the fact that [mothers]H1 [care]O1 for their young, we can not deduce that they ought to do so, [Hume]H2 [argued]O2. SLL+RLL From the fact that mothers care for their young, [we]H1 [can not deduce]O1 that [they]T1 ought to do so, [Hume]H2 [argued]O2. Table 4: Examples from the dataset with label annotations from CRF+ILP and SLL+RLL models for comparison. The first row for each example is the gold standard. opinion holders and opinion targets as compared to SLL+RLL model. There were also instances such as S3 and S4 in Table 4 for which the gold standard does not have an annotation but the SLL+RLL output looks reasonable with respect to our task. In S3, SLL+RLL identifies “is no criticism” as an opinion expression for the target “This”. However, it fails to identify the relation-link between “known and appreciated” and the target “This”. Similarly, SLL+RLL also identifies reasonable opinion entities in S4, whereas the ILP model erroneously annotates “mothers” as the opinion holder and “care” as the opinion expression. 
We handle the task of joint-extraction of opinion entities and opinion relations as a sequence labeling task in this paper and report the performance of the 1-best path at the time of Viterbi inference. However, there are approaches such as discriminative reranking (Collins and Koo, 2005) to rerank the output of an existing system that offer a means for further improving the performance of our SLL+RLL model. In particular, the oracle performance using the top-10 Viterbi paths from our SLL+RLL model has an F-score of 82.11 for opinion expressions, 76.77 for targets and 78.10 for holders. Similarly, IS-ABOUT relations have an F-score of 65.99 and IS-FROM relations, an Fscore of 70.80. These scores are on average 10 points better than the performance of the current SLL+RLL model, indicating that substantial gains might be attained via reranking. 8 Conclusion In this paper, we explored LSTM-based models for the joint extraction of opinion entities and relations. Experimentally, we found that adding sentence-level and relation-level dependencies on the output layer improves the performance on opinion entity extraction, obtaining results within 1-3% of the ILP-based joint model on opinion entities, within 3% for IS-FROM relation and comparable for IS-ABOUT relation. In future work, we plan to explore the effects of pre-training (Bengio et al., 2009) and scheduled sampling (Bengio et al., 2015) for training our LSTM network. We would also like to explore re-ranking methods for our problem. With respect to the fine-grained opinion mining task, a potential future direction to be able to model overlapping and embedded entities and relations and also to extend this model to handle cross-sentential relations. 927 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, pages 41– 48, New York, NY, USA. ACM. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. CoRR, abs/1506.03099. Yoshua Bengio. 2009. Learning deep architectures for ai. Found. Trends Mach. Learn., 2(1):1–127, January. Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, IJCAI’07, pages 2683– 2688, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. Identifying sources of opinions with conditional random fields and extraction patterns. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 355–362, Stroudsburg, PA, USA. Association for Computational Linguistics. Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint extraction of entities and relations for opinion recognition. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP ’06, pages 431–439, Stroudsburg, PA, USA. Association for Computational Linguistics. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Comput. Linguist., 31(1):25–70, March. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. 
Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November. Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional LSTM. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, Olomouc, Czech Republic, December 8-12, 2013, pages 273–278. James Hammerton. 2003. Named entity recognition with long short-term memory. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL ’03, pages 172–175, Stroudsburg, PA, USA. Association for Computational Linguistics. Michiel Hermans and Benjamin Schrauwen. 2013. Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 190–198. Salah El Hihi and Yoshua Bengio. 1996. Hierarchical recurrent neural networks for long-term dependencies. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780, November. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. Ozan Irsoy and Claire Cardie. 2013. Bidirectional recursive neural networks for token-level labeling with structure. arXiv preprint arXiv:1312.0493. Ozan ˙Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 720–728. Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In Proceedings of the Workshop on Sentiment and Subjectivity in Text, SST ’06, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. Nozomi Kobayashi, Kentaro Inui, and Yuji Matsumoto. 2007. Extracting aspect-evaluation and aspect-of relations in opinion mining. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL. Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Finegrained opinion mining with recurrent neural networks and word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1433–1443, Lisbon, Portugal, September. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. CoRR, abs/1601.00770. 928 J¨urgen Schmidhuber. 1992. Learning complex, extended sequences using the principle of history compression. Neural Comput., 4(2):234–242, March. M. Schuster and K.K. Paliwal. 1997. Bidirectional recurrent neural networks. Trans. Sig. Proc., 45(11):2673–2681, November. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 1201–1211, Stroudsburg, PA, USA. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Janyce Wiebe and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. language resources and evaluation. In Language Resources and Evaluation (formerly Computers and the Humanities, page 2005. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 347–354, Stroudsburg, PA, USA. Association for Computational Linguistics. Theresa Ann Wilson. 2008. Fine-grained Subjectivity and Sentiment Analysis: Recognizing the intensity, polarity, and attitudes of private states. Ph.D. thesis, The University of Pittsburgh, June. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In In Proceedings of Conference on Empirical Methods in Natural Language Processing. Bishan Yang and Claire Cardie. 2012. Extracting opinion expressions with semi-markov conditional random fields. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 1335– 1345, Stroudsburg, PA, USA. Association for Computational Linguistics. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1640–1649, Sofia, Bulgaria, August. Association for Computational Linguistics. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. 929
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 930–940, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Transition-Based Left-Corner Parsing for Identifying PTB-Style Nonlocal Dependencies Yoshihide Kato1 and Shigeki Matsubara2 1Information & Communications, Nagoya University 2Graduate School of Information Science, Nagoya University Furo-cho, Chikusa-ku, Nagoya, 464-8601 Japan [email protected] Abstract This paper proposes a left-corner parser which can identify nonlocal dependencies. Our parser integrates nonlocal dependency identification into a transition-based system. We use a structured perceptron which enables our parser to utilize global features captured by nonlocal dependencies. An experimental result demonstrates that our parser achieves a good balance between constituent parsing and nonlocal dependency identification. 1 Introduction Many constituent parsers based on the Penn Treebank (Marcus et al., 1993) are available, but most of them do not deal with nonlocal dependencies. Nonlocal dependencies represent syntactic phenomenon such as wh-movement, A-movement in passives, topicalization, raising, control, right node raising and so on. Nonlocal dependencies play an important role on semantic interpretation. In the Penn Treebank, a nonlocal dependency is represented as a pair of an empty element and a filler. Several methods of identifying nonlocal dependencies have been proposed so far. These methods can be divided into three approaches: pre-processing approach (Dienes and Dubey, 2003b), in-processing approach (Dienes and Dubey, 2003a; Schmid, 2006; Cai et al., 2011; Kato and Matsubara, 2015) and post-processing approach (Johnson, 2002; Levy and Manning, 2004; Campbell, 2004; Xue and Yang, 2013; Xiang et al., 2013; Takeno et al., 2015).1 In preprocessing approach, a tagger called “trace tagger” detects empty elements. The trace tagger uses 1The methods of (Cai et al., 2011; Xue and Yang, 2013; Xiang et al., 2013; Takeno et al., 2015) only detect empty elements. only surface word information. In-processing approach integrates nonlocal dependency identification into a parser. The parser uses a probabilistic context-free grammar to rank candidate parse trees. Post-processing approach recovers nonlocal dependencies from a parser output which does not include nonlocal dependencies. The parsing models of the previous methods cannot use global features captured by nonlocal dependencies. Pre- or in-processing approach uses a probabilistic context-free grammar, which makes it difficult to use global features. Postprocessing approach performs constituent parsing and nonlocal dependency identification separately. This means that the constituent parser cannot use any kind of information about nonlocal dependencies. This paper proposes a parser which integrates nonlocal dependency identification into constituent parsing. Our method adopts an inprocessing approach, but does not use a probabilistic context-free grammar. Our parser is based on a transition system with structured perceptron (Collins, 2002), which can easily introduce global features to its parsing model. We adopt a left-corner strategy in order to use the syntactic relation c-command, which plays an important role on nonlocal dependency identification. 
Previous work on transition-based constituent parsing adopts a shift-reduce strategy with a tree binarization (Sagae and Lavie, 2005; Sagae and Lavie, 2006; Zhang and Clark, 2009; Zhu et al., 2013; Wang and Xue, 2014; Mi and Huang, 2015; Thang et al., 2015; Watanabe and Sumita, 2015), or convert constituent trees to “spinal trees”, which are similar to dependency trees (Ballesteros and Carreras, 2015). These conversions make it difficult for their parsers to capture c-command relations in the parsing process. On the other hand, our parser does not require such kind of conversion. 930 NP NN group NP SBAR WHNP-1 WDT that S NP-SBJ-2 -NONE*T*-1 VP VBD S NP-SBJ -NONE*-2 VP managed to traduce its own charter ... DT the NNP U.N. Figure 1: A parse tree in the Penn Treebank. Our contribution can be summarized as follows: 1. We introduce empty element detection into transition-based left-corner constituent parsing. 2. We extend c-command relation to deal with nodes in parse tree stack in the transition system, and develop heuristic rules which coindex empty elements with their fillers on the basis of the extended version of c-command. 3. We introduce new features about nonlocal dependency to our parsing model. This paper is organized as follows: Section 2 explains how to represent nonlocal dependencies in the Penn Treebank. Section 3 describes our transition-based left-corner parser. Section 4 introduces nonlocal dependency identification into our parser. Section 5 describes structured perceptron and features. Section 6 reports an experimental result, which demonstrated that our parser achieved a good balance between constituent parsing and nonlocal dependency identification. Section 7 concludes this paper. 2 Nonlocal Dependency This section describes nonlocal dependencies in the Penn Treebank (Marcus et al., 1993). A nonlocal dependency is represented as a pair of an empty element and a filler. Figure 1 shows an example of (partial) parse tree in the Penn Treebank. The parse tree includes several nonlocal dependencies. The nodes labeled with -NONE- are empty elements. The terminal symbols such as ∗and ∗T∗represent the type of nonlocal dependency: ∗ represents an unexpressed subject of to-infinitive. ∗T∗represents a trace of wh-movement. When a terminal symbol of empty element is indexed, its filler exists in the parse tree. The filler has the same number. For example, ∗T∗-1 means that the node WHNP-1 is the corresponding filler. Table 1 gives a brief description of empty elements quoted from the annotation guideline (Bies et al., 1995). For more details, see the guideline. 3 Transition-Based Left-Corner Parsing This section describes our transition-based leftcorner parser. As with previous work (Sagae and Lavie, 2005; Sagae and Lavie, 2006; Zhang and Clark, 2009; Zhu et al., 2013; Wang and Xue, 2014; Mi and Huang, 2015; Thang et al., 2015; Watanabe and Sumita, 2015), our transition-based parsing system consists of a set of parser states and a finite set of transition actions, each of which maps a state into a new one. A parser state consists of a stack of parse tree nodes and a buffer of input words. A state is represented as a tuple (σ, i), where σ is the stack and i is the next input word position in the buffer. The initial state is (⟨⟩, 0). The final states are in the form of (⟨[· · ·]TOP⟩, n), where TOP is a special symbol for the root of the parse tree and n is the length of the input sentence. 
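To make this state representation concrete, the following is a minimal sketch in Python (not from the paper; the class and field names are illustrative assumptions) of a parse-tree node and of a parser state consisting of a stack σ and a buffer position i.

```python
# Minimal sketch of the parser state described above: a stack of parse-tree
# nodes plus the index of the next input word in the buffer. Names are
# illustrative, not taken from the authors' implementation.

class Node:
    def __init__(self, label, children=None, head_child=None, word=None):
        self.label = label                # constituent label or POS tag, e.g. "NP", "DT"
        self.children = children or []
        self.head_child = head_child      # at most one child is marked as head
        self.word = word                  # set for terminal nodes only


class State:
    def __init__(self, stack=None, buffer_index=0):
        self.stack = stack or []          # sigma: list of Nodes, rightmost element is s0
        self.buffer_index = buffer_index  # i: position of the next input word

    def is_final(self, sentence_length):
        # Final states have a single TOP-rooted tree and an exhausted buffer.
        return (len(self.stack) == 1
                and self.stack[0].label == "TOP"
                and self.buffer_index == sentence_length)


initial_state = State(stack=[], buffer_index=0)   # corresponds to (<>, 0)
```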
The transition actions for our parser are as follows: • SHIFT(X): pop up the first word from the buffer, assign a POS tag X to the word and push it onto the stack. The SHIFT action assigns a POS tag to the shifted word to perform POS tagging and constituent parsing simultaneously. This is in the same way as Wang and Xue (2014). • LEFTCORNER-{H/∅}(X): pop up the first node from the stack, attach a new node labeled with X to the node as the parent and push it back onto the stack. H and ∅indicate whether or not the popped node is the head child of the new node. • ATTACH-{H/∅}: pop up the top two nodes from the stack, attach the first one to the second one as the rightmost child and push it back onto the stack. H and ∅indicate whether or not the first node is the head child of the second one. We introduce new actions LEFTCORNER and ATTACH. ATTACH action is similar to REDUCE action standardly used in the previous transitionbased parsers. However, there is an important 931 type description n-posi ∗ arbitrary PRO, controlled PRO and trace of A-movement L, R, − ∗EXP∗ expletive (extraposition) R ∗ICH∗ interpret constituent here (discontinuous dependency) L, R ∗RNR∗ right node raising R ∗T∗ trace of A′-movement A, L 0 null complementizer − ∗U∗ unit − ∗?∗ placeholder for ellipsed material − ∗NOT∗ anti-placeholder in template gapping − Table 1: Empty elements in the Penn Treebank. SHIFT(X) (⟨sm, . . . , s0⟩, i) ⇒(⟨sm, . . . , s0, [wi]X⟩, i + 1) LEFTCORNER-{H/∅}(X) (⟨sm, . . . , s1, s0⟩, i) ⇒(⟨sm, . . . , s1, [s0]X⟩, i) ATTACH-{H/∅} (⟨sm, . . . , s2, [σ1]X, s0⟩, i) ⇒(⟨sm, . . . , s2, [σ1s0]X⟩, i) Figure 2: Transition actions for left-corner parsing. difference between ATTACH and REDUCE. The REDUCE action cannot deal with any node with more than two children. For this reason, the previous work converts parse trees into binarized ones. The conversion makes it difficult to capture the hierarchical structure of the parse trees. On the other hand, ATTACH action can handle more than two children. Therefore, our parser does not require such kind of tree binarization. These transition actions are similar to the ones described in (Henderson, 2003), although his parser uses right-binarized trees and does not identify headchildren. Figure 2 summarizes the transition actions for our parser. To guarantee that every non-terminal node has exactly one head child, our parser uses the following constraints: • LEFTCORNER and ATTACH are not allowed when s0 has no head child. • ATTACH-H is not allowed when s1 has a head child. Table 2 shows the first several transition actions which derive the parse tree shown in Figure 1. Head children are indicated by the superscript ∗. Previous transition-based constituent parsing does not handle nonlocal dependencies. One exception is the work of Maier (2015), who proposes shift-reduce constituent parsing with swap action. The parser can handle nonlocal dependencies represented as discontinuous constituents. In this framework, discontinuities are directly annotated by allowing crossing branches. Since the annotation style is quite different from the PTB annotation, the parser is not suitable for identifying the PTB style nonlocal dependencies.2 4 Nonlocal Dependency Identification Nonlocal dependency identification consists of two subtasks: • empty element detection. • empty element resolution, which coindexes empty elements with their fillers. Our parser can insert empty elements at an arbitrary position to realize empty element detection. 
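As a rough illustration, the sketch below (reusing the hypothetical Node/State classes from the previous sketch, and simplifying away the head-child legality constraints) shows how the three actions of Figure 2 might manipulate a state; it is not the authors' implementation.

```python
# Illustrative sketch of the transition actions in Figure 2, reusing the
# hypothetical Node/State classes above. Head-child bookkeeping and the
# legality constraints are simplified.

def shift(state, words, pos_tag):
    # SHIFT(X): move the next buffer word onto the stack as a POS-tagged terminal.
    word = words[state.buffer_index]
    state.stack.append(Node(label=pos_tag, word=word))
    state.buffer_index += 1

def leftcorner(state, label, is_head):
    # LEFTCORNER-{H/0}(X): wrap the top stack node in a new parent labeled X.
    child = state.stack.pop()
    state.stack.append(Node(label=label, children=[child],
                            head_child=child if is_head else None))

def attach(state, is_head):
    # ATTACH-{H/0}: make the top node the rightmost child of the node below it.
    child = state.stack.pop()
    parent = state.stack[-1]
    parent.children.append(child)
    if is_head:
        parent.head_child = child
```

The empty-element insertion just mentioned can be realized as one more action of the same shape: it pushes a node onto the stack without advancing the buffer.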
This is in a similar manner as the in-processing approach. Our method coindexes empty elements with their fillers using simple heuristic rules, which are developed for our transition system. 4.1 Empty Element Detection We introduce the following action to deal with empty elements: E-SHIFT(E, t) : (⟨sm, . . . , s0⟩, i) ⇒(⟨sm, . . . , s0, [t]E⟩, i) This action simply inserts an empty element at an arbitrary position and pops up no element from the buffer (see the transition from #11 to #12 shown in Table 2 as an example). 4.2 Annotations For empty element resolution, we augment the Penn Treebank. For nonlocal dependency types 2In (Evang and Kallmeyer, 2011), the PTB-style annotation of types ∗EXP, ∗ICH∗, ∗RNR∗and ∗T∗is transformed into an annotation with crossing branches. 932 action # state (initial state) 1 (⟨⟩, 0) SHIFT(DT) 2 (⟨[the]DT⟩, 1) LEFTCORNER-∅(NP) 3 (⟨[[the]DT]NP⟩, 1) SHIFT(NNP) 4 (⟨[[the]DT]NP, [U.N.]NNP⟩, 2) ATTACH-∅ 5 (⟨[[the]DT[U.N.]NNP]NP⟩, 2) SHIFT(NN) 6 (⟨[[the]DT[U.N.]NNP]NP, [group]NN⟩, 3) ATTACH-H 7 (⟨[[the]DT[U.N.]NNP[group]NN∗]NP⟩, 3) LEFTCORNER-H(NP) 8 (⟨[[[the]DT[U.N.]NNP[group]NN∗]NP∗]NP⟩, 3) SHIFT(WDT) 9 (⟨[[[the]DT[U.N.]NNP[group]NN∗]NP∗]NP, [that]WDT⟩, 4) LEFTCORNER-H(WHNP-∗T∗-NP-L) 10 (⟨[[[the]DT[U.N.]NNP[group]NN∗]NP∗]NP, [[that]WDT∗]WHNP-∗T∗-NP-L⟩, 4) LEFTCORNER-H(SBAR) 11 (⟨[[[the]DT[U.N.]NNP[group]NN∗]NP∗]NP, [[[that]WDT∗]WHNP-∗T∗-NP-L∗]SBAR⟩, 4) E-SHIFT(-NONE-NP-L, ∗T∗) 12 (⟨[[[the]DT[U.N.]NNP[group]NN∗]NP∗]NP, [[[that]WDT∗]WHNP-∗T∗-NP-L∗]SBAR, [∗T∗]-NONE-NP-L⟩, 4) Table 2: An example of transition action sequence. ∗EXP∗, ∗ICH∗, ∗RNR∗and ∗T∗, we assign the following information to each filler and each empty element: • The nonlocal dependency type (only for filler). • The nonlocal dependency category, which is defined as the category of the parent of the empty element. • The relative position of the filler, which take a value from {A, L, R}. “A” means that the filler is an ancestor of the empty element. “L” (“R”) means that the filler occurs to the left (right) of the empty element. Table 1 summarizes which value each empty element can take. The information is utilized for coindexing empty elements with fillers. Below, we write n-type(x), n-cat(x) and n-posi(x) for the information of a node x, respectively. If an empty element of type ∗is indexed, we annotate the empty element in the same way.3 Furthermore, we assign a tag OBJCTRL to every empty element if its coindexed constituent does not have the function tag SBJ.4 This enables our parser to distinguish between subject control and object control. Figure 3 shows the augmented version of the parse tree of Figure 1. 4.3 Empty Element Resolution Nonlocal dependency annotation in the Penn Treebank is based on Chomsky’s GB-theory (Chomsky, 1981). This means that there exist ccommand relations between empty elements and 3We omit its nonlocal dependency category, since it is always NP. 4In the Penn Treebank, every subject has the tag SBJ. NP NN group NP SBAR WHNP-*T*-NP-L WDT that S NP-SBJ -NONE-NP-L *T* VP VBD S NP-SBJ -NONE-L * managed DT the NNP U.N. VP to traduce its own charter ... Figure 3: An augmented parse tree. fillers in many cases. For example, all the empty elements in Figure 1 are c-commanded by their fillers. Our method coindexes empty elements with their fillers by simple heuristic rules based on the c-command relation. 
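Continuing the earlier sketch, the E-SHIFT action and the augmented annotation (n-type, n-cat, n-posi and the optional OBJCTRL tag) might be encoded as follows. Storing the information in explicit fields rather than in composite labels such as WHNP-∗T∗-NP-L is a simplification made here for readability; the field and function names are assumptions.

```python
# Sketch of the E-SHIFT action and of the augmented annotation used for
# empty-element resolution (n-type, n-cat, n-posi, OBJCTRL).

class EmptyInfo:
    def __init__(self, n_type=None, n_cat=None, n_posi=None, obj_ctrl=False):
        self.n_type = n_type      # dependency type, e.g. "*", "*T*" (assigned to fillers)
        self.n_cat = n_cat        # category of the empty element's parent, e.g. "NP"
        self.n_posi = n_posi      # relative filler position: "A", "L" or "R"
        self.obj_ctrl = obj_ctrl  # OBJCTRL tag for object-controlled *

def e_shift(state, empty_label, trace_symbol, info):
    # E-SHIFT(E, t): push an empty element [t]E onto the stack without
    # consuming anything from the buffer (buffer_index stays unchanged).
    node = Node(label=empty_label, word=trace_symbol)
    node.empty_info = info
    state.stack.append(node)

# e.g., roughly the transition from #11 to #12 in Table 2:
# e_shift(state, "-NONE-", "*T*", EmptyInfo(n_type="*T*", n_cat="NP", n_posi="L"))
```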
4.3.1 C-command Relation Here, we define c-command relation in a parse tree as follows: • A node x c-commands a node y if and only if there exists some node z such that z is a sibling of x(x ̸= z) and y is a descendant of z. It is difficult for previous transition-based shift-reduce constituent parsers to recognize c-command relations between nodes, since parse trees are binarized. On the other hand, our left-corner parser needs not to binarize parse trees and can easily recognize c-command relations. Furthermore, we extend c-command relation to handle nodes in a stack of our transition system. For two nodes x and y in a stack, the following statement necessarily holds: 933 NP2 NP5 SBAR8 WHNP-*T*-NP-L7 WDT6 that S11 NP-SBJ10 -NONE-NP-L9 *T* VP13 VBD12 -NONE-L14 * managed NNS3 U.N. DT1 the s0 NN4 group s1 s2 s3 e Figure 4: An example of resolution of [∗]-NONE-L. • Let S = (⟨sm, . . . , s0⟩, i) be a parser state. Let y be a descendant of sj and x be a child of some node sk(j < k ≤m), respectively. Then, x c-commands y in any final state derived from the state S. Below, we say that x c-commands y, even when the nodes x and y satisfy the above statement. As an example, let us consider the state shown in Figure 4. The subscripts of nodes indicate the order in which the nodes are instantiated. The nodes in dotted box c-command the shifted node -NONE-L14 in terms of the above statement. In the parse tree shown in Figure 3, which is derived from this state, these nodes c-commands -NONE-L14 by the original definition. 4.3.2 Resolution Rules Our parser coindexes an empty element with its filler, when E-SHIFT or ATTACH is executed. ESHIFT action coindexes the shifted empty element e such that n-posi(e) = L with its filler. ATTACH action coindexes the attached filler s0 such that n-posi(s0) = R with its corresponding empty element. Resolution rules consist of three parts: PRECONDITION, CONSTRAINT and SELECT. Empty element resolution rule is applied to a state when the state satisfies PRECONDITION of the rule. CONSTRAINT represents the conditions which coindexed element must satisfy. SELECT can take two values ALL and RIGHTMOST. When there exist several elements satisfying the CONSTRAINT, SELECT determines how to select coindexed elements. ALL means that all the elements satisfying the CONSTRAINT are coindexed. RIGHTMOST selects the rightmost element satisfying the CONSTRAINT. The most frequent type of nonlocal dependency in the Penn Treebank is ∗. Figure 5 shows the resolution rules for type ∗. Here, ch(s) designates the set of the children of s. sbj(x) means that x has a function tag SBJ. par(x) designates the parent of x. cat(x) represents the constituent catRule: ∗-L PRECONDITION ACTION=E-SHIFT(-NONE-L, ∗) CONSTRAINT for coindexed element x x ∈∪m j=0 ch(sj) # x c-commands e sbj(x) SELECT: RIGHTMOST Rule: ∗-L-OBJCTRL PRECONDITION ACTION=E-SHIFT(-NONE-L-OBJCTRL, ∗) CONSTRAINT for coindexed element x x ∈∪m j=0 ch(sj) # x c-commands e cat(x) = NP ∨cat(x) = PP cat(par(x)) = VP SELECT: RIGHTMOST Rule: ∗-R PRECONDITION ACTION=ATTACH sbj(s0) CONSTRAINT for coindexed element x x ∈des(s1) # s0 c-commands x x = [∗]-NONE-R free(x, ⟨sm, . . . , s0⟩) SELECT: ALL Figure 5: Resolution rules for type ∗. egory of x. des(s) designates the set of the proper descendants of s. free(x, σ) means that x is not coindexed with a node included in σ. The first rule ∗-L is applied to a state when E-SHIFT action inserts an empty element e = [∗]-NONE-L. This rule seeks a subject which ccommands the shifted empty element. 
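As a hedged illustration of how such a rule can be evaluated directly on the stack, the sketch below implements the ∗-L rule: it collects the children of all stack elements (each of which, by the extended c-command statement above, c-commands the freshly shifted empty element), keeps the subjects, and coindexes the rightmost one. The is_subject helper and the integer coindexing scheme are assumptions made for illustration.

```python
# Sketch of resolution rule *-L (Figure 5): when E-SHIFT(-NONE-L, *) fires,
# find the rightmost subject among the children of the stack elements; by the
# extended c-command statement above, every such child c-commands the shifted
# empty element e.

def is_subject(node):
    # In the Penn Treebank every subject carries the function tag SBJ.
    return "SBJ" in getattr(node, "function_tags", set())

def resolve_star_L(stack, empty_element, next_index):
    candidates = []
    for s in stack:                       # s_m ... s_0, left to right
        for child in s.children:          # x in ch(s_j), so x c-commands e
            if is_subject(child):         # CONSTRAINT: sbj(x)
                candidates.append(child)
    if candidates:                        # SELECT: RIGHTMOST
        filler = candidates[-1]
        filler.coindex = empty_element.coindex = next_index
        return next_index + 1
    return next_index                     # leave e unindexed if no subject is found
```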
The first constraint means that the node x c-commands the empty element e, since the resulting state of ESHIFT action is (⟨sm, . . . , s0, e⟩, i), and x and e satisfy the statement in section 4.3.1. For example, the node NP-SBJ10 shown in Figure 4 satisfies these constraints (the dotted box represents the first constraint). Therefore, our parser coindexes NP-SBJ10 with -NONE-L14. The second rule ∗-L-OBJCTRL seeks an object instead of a subject. The second and third constraints identify whether or not x is an argument. If x is a prepositional phrase, our parser coindexes e with x’s child noun phrase instead of x itself, in order to keep the PTB-style annotation. The third rule ∗-R is for null subject of participial clause. Figure 6 shows an example of applying the rule ∗-R to a state. This rule is applied to a state when the transition action is ATTACH and s0 is a subject. By definition, the first constraint means that s0 c-commands x. The second most frequent type is ∗T∗. Figure 7 shows the rule for ∗T∗. This rule is ap934 SBAR7 WHNP-*T*-NP-L6 WP5 which S11 NP-SBJ9 VP13 VBZ12 NP15 -NONE-NP-L14 *T* takes NNS10 bank DT8 the VP-CRD16 CC17 or VP19 VBZ18 plans -NONE-NP-L20 *T* remove coordinate structure SBAR7 WHNP-*T*-NP-L6 WP5 which S11 NP-SBJ9 NNS10 bank DT8 the VP19 VBZ18 plans -NONE-NP-L20 *T* s0 s1 s2 s3 e Figure 9: An example of resolution of [∗T∗]-NONE-NP-L in the case where the stack has coordinate structure. S3 NP-SBJ2 -NONE-R1 * VP4 Considered as a whole S5 ,6 , NP-SBJ7 s0 s1 des(s1) S3 NP-SBJ2 -NONE-R1 * VP4 S5 ,6 , NP-SBJ7 Attach des(s1) Considered as a whole s1 Figure 6: An example of resolution of [∗]-NONE-R. Rule: ∗T∗-L PRECONDITION ACTION=E-SHIFT(-NONE-L, ∗T∗) CONSTRAINT for coindexed element x # x c-commands e x ∈∪ s∈removeCRD(⟨sm,...,s0⟩) ch(s) match(x, e) free(x, removeCRD(⟨sm, . . . , s0⟩)) SELECT: RIGHTMOST Figure 7: Resolution rule for type ∗T∗. plied to a state when E-SHIFT action inserts an empty element of type ∗T∗. Here, match(x, y) checks the consistency between x and y, that is, match(x, y) holds if and only if n-type(x) = n-type(y), n-cat(x) = n-cat(y), n-posi(x) = n-posi(y), cat(x) ̸= -NONE- and cat(y) = -NONE-. removeCRD(⟨sm, . . . , s0⟩) is a stack which is obtained by removing sj(0 ≤j ≤m) which is annotated with a tag CRD.5 The tag CRD 5We assign a tag CRD to a node, when it matches the pattern [· · · [· · ·]X · · · [· · ·](CC|CONJP|,|:) · · · [· · ·]X · · ·]X. SBAR7 WHNP-*T*-NP-L6 WP5 which S11 NP-SBJ9 VP13 VBZ12 -NONE-NP-L14 *T* takes NNS10 bank DT8 the s0 s1 s2 e Figure 8: An example of resolution of [∗T∗]-NONE-NP-L. means that the node is coordinate structure. In general, each filler of type ∗T∗is coindexed with only one empty element. However, a filler of type ∗T∗can be coindexed with several empty elements if the empty elements are included in coordinate structure. This is the reason why our parser uses removeCRD. Figure 8 and 9 give examples of resolution for type ∗T∗. The empty elements [∗T∗]-NONE-A are handled by an exceptional process. When ATTACH action is applied to a state (⟨sm, . . . , s0⟩, i) such that cat(s0) = PRN, the parser coindexes the empty element x = [∗T∗]-NONE-A included in s0 with s1. More precisely, the coindexation is executed if the following conditions hold: • x ∈des(s0) • match(s1, x) • free(x, ⟨sm, . . . 
, s0⟩) For the other types of nonlocal dependencies, that is, ∗EXP∗, ∗ICH∗and ∗RNR∗, we use a simi935 Rule: ∗EXP∗-R PRECONDITION ACTION=ATTACH s0 is a filler of type ∗EXP∗ CONSTRAINT for coindexed element x x ∈∪m j=1 ch(sj) # x c-commands s0 x = [[it]PRP]NP SELECT: RIGHTMOST Rule: ∗ICH∗-L PRECONDITION ACTION=E-SHIFT(-NONE-L, ∗ICH∗) CONSTRAINT for coindexed element x match(x, e) free(x, ⟨sm, . . . , s0⟩) SELECT: RIGHTMOST Rule: ∗ICH∗-R PRECONDITION ACTION=ATTACH s0 is a filler of type ∗ICH∗ CONSTRAINT for coindexed element x x ∈∪m j=1 des(sj) match(s0, x) free(x, ⟨sm, . . . , s0⟩) SELECT: RIGHTMOST Rule: ∗RNR∗-R PRECONDITION ACTION=ATTACH s0 is a filler of type ∗RNR∗ CONSTRAINT for coindexed element x x ∈des(s1) # s0 c-commands x match(s0, x) free(x, ⟨sm, . . . , s0⟩) SELECT: ALL Figure 10: Resolution rule for ∗EXP∗, ∗ICH∗and ∗RNR∗. lar idea to design the resolution rules. Figure 10 shows the resolution rules. These heuristic resolution rules are similar to the previous work (Campbell, 2004; Kato and Matsubara, 2015), which also utilizes c-command relation. An important difference is that we design heuristic rules not for fully-connected parse tree but for stack of parse trees derived by left-corner parsing. That is, the extend version of c-command relation plays an important role on our heuristic rules. 5 Parsing Strategy We use a beam-search decoding with the structured perceptron (Collins, 2002). A transition action a for a state S has a score defined as follows: score(S, a) = w · f(S, a) where f(S, a) is the feature vector for the stateaction pair (S, a), and w is a weight vector. The input: sentence w1 · · · wn, beam size k H ←{S0} # S0 is the initial state for w1 · · · w0 repeat N times do C ←{} for each S ∈H do for each possible action a do S′ ←apply a to S push S′ to C H ←k best states of C return best final state in C Figure 11: Beam-search parsing. score of a state S′ which is obtained by applying an action a to a state S is defined as follows: score(S′) = score(S) + score(S, a) For the initial state S0, score(S0) = 0. We learn the weight vector w by max-violation method (Huang et al., 2012) and average the weight vector to avoid overfitting the training data (Collins, 2002). In our method, action sequences for the same sentence have different number of actions because of E-SHIFT action. To absorb the difference, we use an IDLE action, which are proposed in (Zhu et al., 2013): IDLE : (⟨[· · ·]TOP⟩, n) ⇒(⟨[· · ·]TOP⟩, n) Figure 11 shows details of our beam-search parsing. The algorithm is the same as the previous transition-based parsing with structured perceptron. One point to be noted here is how to determine the maximum length of action sequence (= N) which the parser allows. Since it is impossible to know in advance how many empty elements a parse tree has, we need to set this parameter as a sufficiently larger value. 5.1 Features A feature is defined as the concatenation of a transition action and a state feature which is extracted using a feature template. Table 3 shows our baseline feature templates. The feature templates are similar to the ones of (Zhang and Clark, 2009), which are standardly used as baseline templates for transition-based constituent parsing. Here, bi and si stand for the i-th element of buffer and stack, respectively. x.c represents x’s augmented label. x.l, x.r and x.h represent x’s leftmost, rightmost and head children. x.t and x.w represents x’s head POS tag and head word, respectively. 
x.i indicates whether or not the initial letter of x is capitalized. When a non-terminal node 936 type feature templates unigram s0.c ◦s0.t, s0.c ◦s0.w, s0.l.c ◦s0.l.w, s0.r.c ◦s0.r.w, s0.h.c ◦s0.h.w, s1.c ◦s1.t, s1.c ◦s1.w, s1.l.c ◦s1.l.w, s1.r.c ◦s1.r.w, s1.h.c ◦s1.h.w, s2.c ◦s2.t, s2.c ◦s2.w, s3.c ◦s3.t, s3.c ◦s3.w, b0.i, b0.w, b1.i, b1.w, b2.i, b2.w, b3.i, b3.w bigram s1.w ◦s0.w, s1.c ◦s0.w, s1.w ◦s0.c, s1.w ◦s0.w, s0.c ◦b0.i, s0.c ◦b0.w, s0.w ◦b0.i, s0.w ◦b0.w, s1.c ◦b0.i, s1.c ◦b0.w, s1.w ◦b0.i, s1.w ◦b0.w, b0.i ◦b1.i, b0.w ◦b1.i, b0.i ◦b1.w, b0.w ◦b1.w trigram s2.c ◦s1.c ◦s0.c, s2.c ◦s1.c ◦s0.w, s2.c ◦s1.w ◦s0.c, s2.w ◦s1.c ◦s0.c, s1.c ◦s0.c ◦b0.i, s1.w ◦s0.c ◦b0.i, s1.c ◦s0.w ◦b0.i, s1.w ◦s0.w ◦b0.i Table 3: Baseline feature templates. feature templates s0.n0.c, s0.n1.c, s1.n0.c, s1.n1.c, rest2.n0.c, rest2.n1.c Table 4: Nonlocal dependency feature templates. does not yet have a head child, the head-based atomic features are set to a special symbol nil. To extract the features, we need to identify head children in parse trees. We use the head rules described in (Surdeanu et al., 2008). In addition to these features, we introduce a new feature which is related to empty element resolution. When a transition action invokes empty element resolution described in section 4.3.2, we use as a feature, whether or not the procedure coindexes empty elements with a filler. Such a feature is difficult for a PCFG to capture. This feature enables our parsing model to learn the resolution rule preferences implicitly, while the training process is performed only with oracle action sequences. In addition, we use features about free empty elements and fillers. Table 4 summarizes such feature templates. Here, x.ni stands for the i-th rightmost free element included in x, and resti stands for the stack ⟨sm, . . . , si⟩. 6 Experiment We conducted an experiment to evaluate the performance of our parser using the Penn Treebank. We used a standard setting, that is, section 02-21 is for the training data, section 22 is for the development data and section 23 is for the test data. In training, we set the beam size k to 16 to achieve a good efficiency. We determined the optimal iteration number of perceptron training, and the beam size (k was set to 16, 32 and 64) for decoding on the development data. The maximum type system F1 TS Zhu et al. (2013) (beam 16) 90.4 Zhu et al. (2013)∗(beam 16) 91.3 Mi and Huang (2015) (beam 32) 90.3 Mi and Huang (2015) (beam 32,DP) 90.8 Thang et al. (2015) (A∗) 91.1 Ballesteros and Carreras (2015) (beam 64) 89.0 NDI Charniak (2000)† (post-processing) 89.6 Dienes and Dubey (2003a) (in-processing) 86.4 Schmid (2006) (in-processing) 86.6 Kato and Matsubara (2015) (in-processing) 87.7 ours CF (beam 64) 88.9 baseline features (beam 64) 89.0 baseline + ND features (beam 64) 88.9 TS: transition-based parsers with structured perceptron. NDI: parsers with nonlocal dependency identification. DP: Dynamic Programming. Zhu et al. (2013)∗uses additional language resources. †Johnson (2002) and Campbell (2004) used the output of Charniak’s parser. Table 5: Comparison for constituent parsing. length of action sequences (= N) was set to 7n, where n is the length of input sentence. This maximum length was determined to deal with the sentences in the training data. Table 5 presents the constituent parsing performances of our system and previous systems. We used the labeled bracketing metric PARSEVAL (Black et al., 1991). 
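For readers unfamiliar with the metric, a simplified sketch of labeled bracketing precision, recall and F1 is given below, reusing the hypothetical Node class from the earlier sketches. Real PARSEVAL evaluation applies further conventions (e.g., for punctuation and root brackets) that are omitted here.

```python
# Simplified sketch of labeled bracketing precision/recall/F1 in the PARSEVAL
# spirit, reusing the hypothetical Node class above.

from collections import Counter

def brackets(tree, start=0):
    # Collect (label, start, end) spans for every non-terminal node.
    if tree.word is not None:                      # terminal: spans one word
        return start + 1, []
    spans, pos = [], start
    for child in tree.children:
        pos, child_spans = brackets(child, pos)
        spans.extend(child_spans)
    spans.append((tree.label, start, pos))
    return pos, spans

def labeled_bracket_f1(gold_tree, parsed_tree):
    _, gold = brackets(gold_tree)
    _, parsed = brackets(parsed_tree)
    matched = sum((Counter(gold) & Counter(parsed)).values())
    precision = matched / len(parsed) if parsed else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```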
Here, “CF” is the parser which was learned from the training data where nonlocal dependencies are removed. This result demonstrates that our nonlocal dependency identification does not have a bad influence on constituent parsing. From the viewpoint of transitionbased constituent parsing, our left-corner parser is somewhat inferior to other perceptron-based shiftreduce parsers. On the other hand, our parser outperforms the parsers which identify nonlocal dependency based on in-processing approach. We use the metric proposed by Johnson (2002) to evaluate the accuracy of nonlocal dependency identification. Johnson’s metric represents a nonlocal dependency as a tuple which consists of the type of the empty element, the category of the empty element, the position of the empty element, the category of the filler and the position of the filler. For example, the nonlocal dependency of the type ∗T∗in Figure 1 is represented as (∗T∗, NP, [4, 4], WHNP, [3, 4]). The precision and the recall are measured using these tuples. For more details, see (Johnson, 2002). Table 6 shows the nonlocal dependency identification performances of our method and previous methods. Previous in-processing approach 937 Unindexed empty elements are excluded rec. pre. F1 rec. pre. F1 Johnson (2002) (post-processing) 63 73 68 − − − D & D (2003a) (pre-processing) 66.0 80.5 72.6 − − − D & D (2003b) (in-processing) 68.7 81.5 74.6 − − − Campbell (2004) (post-processing) 75.1 78.3 76.7 − − − Schmid (2006) (in-processing) − − − 73.5 81.7 77.4 K&M (2015) (in-processing) 75.6 80.6 78.0 73.6 80.3 76.8 baseline features 70.4 79.7 74.8 65.4 81.1 72.4 + ND features 75.5 81.4 78.4 73.8 79.8 76.7 Table 6: Comparison for nonlocal dependency identification. achieved the state-of-the-art performance of nonlocal dependency identification, while it was inferior in terms of constituent parsing accuracy. Our nonlocal dependency identification is competitive with previous in-processing approach, and its accuracy of constituent parsing is higher than previous in-processing approach. As a whole, our parser achieves a good balance between constituent parsing and nonlocal dependency identification. Table 7 summarizes the accuracy of nonlocal dependency identification for each type of nonlocal dependency. 7 Conclusion This paper proposed a transition-based parser which identifies nonlocal dependencies. Our parser achieves a good balance between constituent parsing and nonlocal dependency identification. In the experiment reported in this paper, we used simple features which are captured by nonlocal dependencies. In future work, we will develop lexical features which are captured by nonlocal dependencies. Acknowledgements This research was partially supported by the Grant-in-Aid for Scientific Research (B) (No.26280082) of JSPS. References Miguel Ballesteros and Xavier Carreras. 2015. Transition-based spinal parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 289–299, Beijing, China, July. Ann Bies, Mark Ferguson, Karen Katz, and Robert MacIntyre. 1995. Bracketing Guidelines for Treebank II Style Penn Treebank Project. University of Pennsylvania. (F, E, T) freq. pre. rec. 
F1 (NP, NP, ∗) 1146 76.7 75.4 76.1 (−, -NONE-, 0) 545 92.3 83.7 87.8 (WHNP, NP, ∗T∗) 507 88.0 84.0 86.0 (−, NP, ∗) 477 69.0 71.7 70.3 (−, -NONE-, ∗U∗) 388 98.4 93.6 95.9 (S, S, ∗T∗) 277 83.6 80.9 82.2 (WHADVP, ADVP, ∗T∗) 164 82.1 70.1 75.7 (−, WHNP, 0) 107 73.3 51.4 60.4 (−, WHADVP, 0) 36 80.8 58.3 67.7 (PP, PP, ∗ICH∗) 29 20.0 3.5 5.9 (WHPP, PP, ∗T∗) 22 84.2 72.7 78.1 (SBAR, SBAR, ∗EXP∗) 16 71.4 31.3 43.5 (S, S, ∗ICH∗) 15 36.4 26.7 30.8 (S, S, ∗EXP∗) 14 50.0 42.9 46.2 (SBAR, SBAR, ∗ICH∗) 12 0.0 0.0 0.0 (−, NP, ∗?∗) 11 0.0 0.0 0.0 (−, S, ∗?∗) 9 100.0 11.1 20.0 (−, VP, ∗?∗) 8 45.5 62.5 52.6 (VP, VP, ∗T∗) 8 40.0 25.0 30.8 (ADVP, ADVP, ∗T∗) 7 80.0 57.1 66.7 (PP, PP, ∗T∗) 7 80.0 57.1 66.7 (−, -NONE-, ∗?∗) 7 0.0 0.0 0.0 (ADJP, ADJP, ∗T∗) 6 66.7 33.3 44.4 (ADVP, ADVP, ∗ICH∗) 6 0.0 0.0 0.0 (NP, NP, ∗ICH∗) 6 0.0 0.0 0.0 (VP, VP, ∗ICH∗) 6 0.0 0.0 0.0 Table 7: Accuracy of nonlocal dependency identification for all types of nonlocal dependency that occurred more than 5 times in section 23 of the Penn Treebank. F, E and T give the category of its filler, the category of its empty element and the type of nonlocal dependency, respectively. E. Black, S. Abney, D. Flickenger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proceedings of the 4th DARPA Speech and Natural Language Workshop, pages 306–311. Shu Cai, David Chiang, and Yoav Goldberg. 2011. Language-independent parsing with empty elements. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 212– 938 216, Portland, Oregon, USA, June. Richard Campbell. 2004. Using linguistic principles to recover empty categories. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 645–652, Barcelona, Spain, July. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics, pages 132–139, April. Noam Chomsky. 1981. Lectures on government and binding: The Pisa lectures. Walter de Gruyter. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1–8, July. P´eter Dienes and Amit Dubey. 2003a. Antecedent recovery: Experiments with a trace tagger. In Michael Collins and Mark Steedman, editors, Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 33–40, July. P´eter Dienes and Amit Dubey. 2003b. Deep syntactic processing by combining shallow methods. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 431–438, Sapporo, Japan, July. Kilian Evang and Laura Kallmeyer. 2011. Plcfrs parsing of english discontinuous constituents. In Proceedings of the 12th International Conference on Parsing Technologies, pages 104–116, Dublin, Ireland, October. James Henderson. 2003. Inducing history representations for broad coverage statistical parsing. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 24–31, Edmonton, Canada. Liang Huang, Suphan Fayong, and Yang Guo. 2012. 
Structured perceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–151, Montr´eal, Canada, June. Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 136–143, Philadelphia, Pennsylvania, USA, July. Yoshihide Kato and Shigeki Matsubara. 2015. Identifying nonlocal dependencies in incremental parsing. IEICE Transactions on Information and Systems, E98-D(4):994–998. Roger Levy and Christopher Manning. 2004. Deep dependencies from context-free statistical parsers: Correcting the surface dependency approximation. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 327–334, Barcelona, Spain, July. Wolfgang Maier. 2015. Discontinuous incremental shift-reduce parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1202–1212, Beijing, China, July. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):310–330. Haitao Mi and Liang Huang. 2015. Shift-reduce constituency parsing with dynamic programming and pos tag lattice. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1030–1035, Denver, Colorado, May–June. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125–132, Vancouver, British Columbia, October. Kenji Sagae and Alon Lavie. 2006. A best-first probabilistic shift-reduce parser. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 691–698, Sydney, Australia, July. Helmut Schmid. 2006. Trace prediction and recovery with unlexicalized PCFGs and slash features. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 177–184, Sydney, Australia, July. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The conll 2008 shared task on joint parsing of syntactic and semantic dependencies. In CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 159–177, Manchester, England, August. Shunsuke Takeno, Masaaki Nagata, and Kazuhide Yamamoto. 2015. Empty category detection using path features and distributed case frames. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1335– 1340, Lisbon, Portugal, September. Le Quang Thang, Hiroshi Noji, and Yusuke Miyao. 2015. Optimal shift-reduce constituent parsing with structured perceptron. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint 939 Conference on Natural Language Processing (Volume 1: Long Papers), pages 1534–1544, Beijing, China, July. Zhiguo Wang and Nianwen Xue. 2014. Joint POS tagging and transition-based constituent parsing in chinese with non-local features. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 733–742, Baltimore, Maryland, June. Taro Watanabe and Eiichiro Sumita. 2015. Transitionbased neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1169–1179, Beijing, China, July. Bing Xiang, Xiaoqiang Luo, and Bowen Zhou. 2013. Enlisting the ghost: Modeling empty categories for machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 822– 831, Sofia, Bulgaria, August. Nianwen Xue and Yaqin Yang. 2013. Dependencybased empty category detection via phrase structure trees. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1051–1060, Atlanta, Georgia, June. Yue Zhang and Stephen Clark. 2009. Transitionbased parsing of the chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09), pages 162–171, Paris, France, October. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 434–443, Sofia, Bulgaria, August. 940
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 941–951, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Siamese CBOW: Optimizing Word Embeddings for Sentence Representations Tom Kenter1 Alexey Borisov1, 2 Maarten de Rijke1 [email protected] [email protected] [email protected] 1 University of Amsterdam, Amsterdam 2 Yandex, Moscow Abstract We present the Siamese Continuous Bag of Words (Siamese CBOW) model, a neural network for efficient estimation of highquality sentence embeddings. Averaging the embeddings of words in a sentence has proven to be a surprisingly successful and efficient way of obtaining sentence embeddings. However, word embeddings trained with the methods currently available are not optimized for the task of sentence representation, and, thus, likely to be suboptimal. Siamese CBOW handles this problem by training word embeddings directly for the purpose of being averaged. The underlying neural network learns word embeddings by predicting, from a sentence representation, its surrounding sentences. We show the robustness of the Siamese CBOW model by evaluating it on 20 datasets stemming from a wide variety of sources. 1 Introduction Word embeddings have proven to be beneficial in a variety of tasks in NLP such as machine translation (Zou et al., 2013), parsing (Chen and Manning, 2014), semantic search (Reinanda et al., 2015; Voskarides et al., 2015), and tracking the meaning of words and concepts over time (Kim et al., 2014; Kenter et al., 2015). It is not evident, however, how word embeddings should be combined to represent larger pieces of text, like sentences, paragraphs or documents. Surprisingly, simply averaging word embeddings of all words in a text has proven to be a strong baseline or feature across a multitude of tasks (Faruqui et al., 2014; Yu et al., 2014; Gershman and Tenenbaum, 2015; Kenter and de Rijke, 2015). Word embeddings, however, are not optimized specifically for representing sentences. In this paper we present a model for obtaining word embeddings that are tailored specifically for the task of averaging them. We do this by directly including a comparison of sentence embeddings—the averaged embeddings of the words they contain—in the cost function of our network. Word embeddings are typically trained in a fast and scalable way from unlabeled training data. As the training data is unlabeled, word embeddings are usually not task-specific. Rather, word embeddings trained on a large training corpus, like the ones from (Collobert and Weston, 2008; Mikolov et al., 2013b) are employed across different tasks (Socher et al., 2012; Kenter and de Rijke, 2015; Hu et al., 2014). These two qualities— (i) being trainable from large quantities of unlabeled data in a reasonable amount of time, and (ii) robust performance across different tasks—are highly desirable and allow word embeddings to be used in many large-scale applications. In this work we aim to optimize word embeddings for sentence representations in the same manner. We want to produce general purpose sentence embeddings that should score robustly across multiple test sets, and we want to leverage large amounts of unlabeled training material. In the word2vec algorithm, Mikolov et al. (2013a) construe a supervised training criterion for obtaining word embeddings from unsupervised data, by predicting, for every word, its surrounding words. 
We apply this strategy at the sentence level, where we aim to predict a sentence from its adjacent sentences (Kiros et al., 2015; Hill et al., 2016). This allows us to use unlabeled training data, which is easy to obtain; the only restriction is that documents need to be split into sentences and that the order between sentences is preserved. The main research question we address is 941 whether directly optimizing word embeddings for the task of being averaged to produce sentence embeddings leads to word embeddings that are better suited for this task than word2vec does. Therefore, we test the embeddings in an unsupervised learning scenario. We use 20 evaluation sets that stem from a wide variety of sources (newswire, video descriptions, dictionary descriptions, microblog posts). Furthermore, we analyze the time complexity of our method and compare it to our baselines methods. Summarizing, our main contributions are: • We present Siamese CBOW, an efficient neural network architecture for obtaining high-quality word embeddings, directly optimized for sentence representations; • We evaluate the embeddings produced by Siamese CBOW on 20 datasets, originating from a range of sources (newswire, tweets, video descriptions), and demonstrate the robustness of embeddings across different settings. 2 Siamese CBOW We present the Siamese Continuous Bag of Words (CBOW) model, a neural network for efficient estimation of high-quality sentence embeddings. Quality should manifest itself in embeddings of semantically close sentences being similar to one another, and embeddings of semantically different sentences being dissimilar. An efficient and surprisingly successful way of computing a sentence embedding is to average the embeddings of its constituent words. Recent work uses pre-trained word embeddings (such as word2vec and GloVe) for this task, which are not optimized for sentence representations. Following these approaches, we compute sentence embeddings by averaging word embeddings, but we optimize word embeddings directly for the purpose of being averaged. 2.1 Training objective We construct a supervised training criterion by having our network predict sentences occurring next to each other in the training data. Specifically, for a pair of sentences (si, sj), we define a probability p(si, sj) that reflects how likely it is for the sentences to be adjacent to one another in the training data. We compute the probability p(si, sj) using a softmax function: pθ(si, sj) = ecos(sθ i ,sθ j ) P s′∈S ecos(sθ i ,sθ′ ) , (1) where sθ x denotes the embedding of sentence sx, based on the model parameters θ. In theory, the summation in the denominator of Equation 1 should range over all possible sentences S, which is not feasible in practice. Therefore, we replace the set S with the union of the set S+ of sentences that occur next to the sentence si in the training data, and S−, a set of n randomly chosen sentences that are not observed next to the sentence si in the training data. The loss function of the network is categorical cross-entropy: L = − X sj∈{S+ ∪S−} p(si, sj) · log(pθ(si, sj)), where p(·) is the target probability the network should produce, and pθ(·) is the prediction it estimates based on parameters θ, using Equation 1. The target distribution simply is: p(si, sj) =  1 |S+|, if sj ∈S+ 0, if sj ∈S−. I.e., if there are 2 positive examples (the sentences preceding and following the input sentence) and 2 negative examples, the target distribution is (0.5, 0.5, 0, 0). 
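A compact numpy sketch of this training objective is given below. It is an illustrative re-implementation under the assumptions stated in the comments, not the authors' Theano code; gradients and the update of the embedding matrix W are left to an autodiff framework or manual differentiation.

```python
# Illustrative numpy sketch of the Siamese CBOW objective: average word
# embeddings per sentence, softmax over cosine similarities with positive and
# negative candidate sentences, categorical cross-entropy loss.

import numpy as np

def sentence_embedding(word_ids, W):
    return W[word_ids].mean(axis=0)               # average of the word embeddings

def cosine(a, b):
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def siamese_cbow_loss(W, input_ids, positive_ids, negative_ids):
    s_i = sentence_embedding(input_ids, W)
    candidates = positive_ids + negative_ids      # S+ followed by S-
    sims = np.array([cosine(s_i, sentence_embedding(ids, W)) for ids in candidates])
    probs = np.exp(sims) / np.exp(sims).sum()     # Equation 1: softmax over cosines

    # Target: uniform mass over the positive sentences, zero on the negatives,
    # e.g. (0.5, 0.5, 0, 0) for two positives and two negatives.
    target = np.zeros(len(candidates))
    target[:len(positive_ids)] = 1.0 / len(positive_ids)

    return -(target * np.log(probs + 1e-8)).sum() # categorical cross-entropy
```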
2.2 Network architecture Figure 1 shows the architecture of the proposed Siamese CBOW network. The input is a projection layer that selects embeddings from a word embedding matrix W (that is shared across inputs) for a given input sentence. The word embeddings are averaged in the next layer, which yields a sentence representation with the same dimensionality as the input word embeddings (the boxes labeled averagei in Figure 1). The cosine similarities between the sentence representation for sentencei and the other sentences are calculated in the penultimate layer and a softmax is applied in the last layer to produce the final probability distribution. 2.3 Training The weights in the word embedding matrix are the only trainable parameters in the Siamese CBOW network. They are updated using stochastic gradient descent. The initial learning rate is monotonically decreased proportionally to the number of training batches. 3 Experimental Setup To test the efficacy of our siamese network for producing sentence embeddings we use multiple 942 word embeddings sentence i word embeddings sentence i-1 w w w average average prediction ... ... word embeddings sentence i+1 average ... negative example 1 average ... negative example n average ... ... cosine layer softmax word embedding matrix W W W W W i,1 i,2 i,... i i-1 i+1 neg 1 neg n Figure 1: Siamese CBOW network architecture. (Input projection layer omitted.) test sets. We use Siamese CBOW to learn word embeddings from an unlabeled corpus. For every sentence pair in the test sets, we compute two sentence representations by averaging the word embeddings of each sentence. Words that are missing from the vocabulary and, hence, have no word embedding, are omitted. The cosine similarity between the two sentence vectors is produced as a final semantic similarity score. As we want a clean way to directly evaluate the embeddings on multiple sets we train our model and the models we compare with on exactly the same training data. We do not compute extra features, perform extra preprocessing steps or incorporate the embeddings in supervised training schemes. Additional steps like these are very likely to improve evaluation scores, but they would obscure our main evaluation purpose in this paper, which is to directly test the embeddings. 3.1 Data We use the Toronto Book Corpus1 to train word embeddings. This corpus contains 74,004,228 already pre-processed sentences in total, which are made up of 1,057,070,918 tokens, originating from 7,087 unique books. In our experiments, we consider tokens appearing 5 times or more, which leads to a vocabulary of 315,643 words. 3.2 Baselines We employ two baselines for producing sentence embeddings in our experiments. We obtain similarity scores between sentence pairs from the baselines in the same way as the ones produced by Siamese CBOW, i.e., we calculate the cosine similarity between the sentence embeddings they produce. 1The corpus can be downloaded from http://www. cs.toronto.edu/˜mbweb/; cf. (Zhu et al., 2015). Word2vec We average word embeddings trained with word2vec.2 We use both architectures, Skipgram and CBOW, and apply default settings: minimum word frequency 5, word embedding size 300, context window 5, sample threshold 10-5, no hierarchical softmax, 5 negative examples. 
Skip-thought As a second baseline we use the sentence representations produced by the skipthought architecture (Kiros et al., 2015).3 Skipthought is a recently proposed method that learns sentence representations in a different way from ours, by using recurrent neural networks. This allows it to take word order into account. As it trains sentence embeddings from unlabeled data, like we do, it is a natural baseline to consider. Both methods are trained on the Toronto Book Corpus, the same corpus used to train Siamese CBOW. We should note that as we use skipthought vectors as trained by Kiros et al. (2015), skip-thought has an advantage over both word2vec and Siamese CBOW as the vocabulary used for encoding sentences contains 930,913 words, three times the size of the vocabulary that we use. 3.3 Evaluation We use 20 SemEval datasets from the SemEval semantic textual similarity task in 2012, 2013, 2014 and 2015 (Agirre et al., 2012; Agirre et al., 2013; Agirre et al., 2014; Agirre et al., 2015), which consist of sentence pairs from a wide array of sources (e.g., newswire, tweets, video descriptions) that have been manually annotated by multiple human assessors on a 5 point scale (1: semantically unrelated, 5: semantically similar). In the ground truth, the final similarity score for every sentence pair is 2The code is available from https://code. google.com/archive/p/word2vec/. 3The code and the trained models can be downloaded from https://github.com/ryankiros/ skip-thoughts/. 943 Table 1: Results on SemEval datasets in terms of Pearson’s r (Spearman’s r). Highest scores, in terms of Pearson’s r, are displayed in bold. Siamese CBOW runs statistically significantly different from the word2vec CBOW baseline runs are marked with a †. See §3.3 for a discussion of the statistical test used. Dataset w2v skipgram w2v CBOW skip-thought Siamese CBOW 2012 MSRpar .3740 (.3991) .3419 (.3521) .0560 (.0843) .4379† (.4311) MSRvid .5213 (.5519) .5099 (.5450) .5807 (.5829) .4522† (.4759) OnWN .6040 (.6476) .6320 (.6440) .6045 (.6431) .6444† (.6475) SMTeuroparl .3071 (.5238) .3976 (.5310) .4203 (.4999) .4503† (.5449) SMTnews .4487 (.3617) .4462 (.3901) .3911 (.3628) .3902† (.4153) 2013 FNWN .3480 (.3401) .2736 (.2867) .3124 (.3511) .2322† (.2235) OnWN .4745 (.5509) .5165 (.6008) .2418 (.2766) .4985† (.5227) SMT .1838 (.2843) .2494 (.2919) .3378 (.3498) .3312† (.3356) headlines .5935 (.6044) .5730 (.5766) .3861 (.3909) .6534† (.6516) 2014 OnWN .5848 (.6676) .6068 (.6887) .4682 (.5161) .6073† (.6554) deft-forum .3193 (.3810) .3339 (.3507) .3736 (.3737) .4082† (.4188) deft-news .5906 (.5678) .5737 (.5577) .4617 (.4762) .5913† (.5754) headlines .5790 (.5544) .5455 (.5095) .4031 (.3910) .6364† (.6260) images .5131 (.5288) .5056 (.5213) .4257 (.4233) .6497† (.6484) tweet-news .6336 (.6544) .6897 (.6615) .5138 (.5297) .7315† (.7128) 2015 answ-forums .1892 (.1463) .1767 (.1294) .2784 (.1909) .2181 (.1469) answ-students .3233 (.2654) .3344 (.2742) .2661 (.2068) .3671† (.2824) belief .2435 (.2635) .3277 (.3280) .4584 (.3368) .4769 (.3184) headlines .1875 (.0754) .1806 (.0765) .1248 (.0464) .2151† (.0846) images .2454 (.1611) .2292 (.1438) .2100 (.1220) .2560† (.1467) the mean of the annotator judgements, and as such can be a floating point number like 2.685. The evaluation metric used by SemEval, and hence by us, is Pearson’s r. As Spearman’s r is often reported as well, we do so too. 
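To make the evaluation protocol explicit, a minimal sketch of scoring a test set could look like the following; the whitespace tokenization, the handling of fully out-of-vocabulary sentences, and the variable names are simplifying assumptions.

```python
# Minimal sketch of the unsupervised evaluation protocol: average the word
# embeddings of each sentence (skipping out-of-vocabulary words), score a pair
# by cosine similarity, and correlate the scores with the human judgements
# using Pearson's r.

import numpy as np
from scipy.stats import pearsonr

def embed(sentence, embeddings):
    vecs = [embeddings[w] for w in sentence.split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else None

def pair_score(s1, s2, embeddings):
    v1, v2 = embed(s1, embeddings), embed(s2, embeddings)
    if v1 is None or v2 is None:        # assumption: score 0 if a sentence is fully OOV
        return 0.0
    return float(v1.dot(v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def evaluate(pairs, gold_scores, embeddings):
    # pairs: list of (sentence1, sentence2); gold_scores: mean annotator judgements.
    system_scores = [pair_score(s1, s2, embeddings) for s1, s2 in pairs]
    r, _ = pearsonr(system_scores, gold_scores)
    return r
```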
Statistical significance To see whether Siamese CBOW yields significantly different scores for the same input sentence pairs from word2vec CBOW—the method it is theoretically most similar to—we compute Wilcoxon signed-rank test statistics between all runs on all evaluation sets. Runs are considered statistically significantly different for p-values < 0.0001. 3.4 Network To comply with results reported in other research (Mikolov et al., 2013b; Kusner et al., 2015) we fix the embedding size to 300 and only consider words appearing 5 times or more in the training corpus. We use 2 negative examples (see §4.2.2 for an analysis of different settings). The embeddings are initialized randomly, by drawing from a normal distribution with µ = 0.0 and σ = 0.01. The batch size is 100. The initial learning rate α is 0.0001, which we obtain by observing the loss on the training data. Training consists of one epoch. We use Theano (Theano Development Team, 2016) to implement our network.4 We ran our experiments on GPUs in the DAS5 cluster (Bal et al., 2016). 4The code for Siamese CBOW is available under an open-source license at https://bitbucket.org/ TomKenter/siamese-cbow. 944 4 Results In this section we present the results of our experiments, and analyze the stability of Siamese CBOW with respect to its (hyper)parameters. 4.1 Main experiments In Table 1, the results of Siamese CBOW on 20 SemEval datasets are displayed, together with the results of the baseline systems. As we can see from the table, Siamese CBOW outperforms the baselines in the majority of cases (14 out of 20). The very low scores of skip-thought on MSRpar appear to be a glitch, which we will ignore. It is interesting to see that for the set with the highest average sentence length (2013 SMT, with 24.7 words per sentence on average) Siamese CBOW is very close to skip-thought, the best performing baseline. In terms of lexical term overlap, unsurprisingly, all methods have trouble with the sets with little overlap (2013 FNWN, 2015 answers-forums, which both have 7% lexical overlap). It is interesting to see, however, that for the next two sets (2015 belief and 2012 MSRpar, 11% and 14% overlap respectively) Siamese CBOW manages to get the best performance. The highest performance on all sets is 0.7315 Pearson’s r of Siamese CBOW on the 2014 tweet-news set. This figure is not very far from the best performing SemEval run that year which has 0.792 Pearson’s r. This is remarkable as Siamese CBOW is completely unsupervised, while the NTNU system which scored best on this set (Lynum et al., 2014) was optimized using multiple training sets. In recent work, Hill et al. (2016) present FastSent, a model similar to ours (see §5 for a more elaborate discussion); results are not reported for all evaluation sets we use, and hence, we compare the results of FastSent and Siamese CBOW separately, in Table 2. FastSent and Siamese CBOW each outperform the other on half of the evaluation sets, which clearly suggests that the differences between the two methods are complementary.5 4.2 Analysis Next, we investigate the stability of Siamese CBOW with respect to its hyper-parameters. In 5The comparison is to be interpreted with caution as it is not evident what vocabulary was used for the experiments in (Hill et al., 2016); hence, the differences observed here might simply be due to differences in vocabulary coverage. Table 2: Results on SemEval 2014 datasets in terms of Pearson’s r (Spearman’s r). Highest scores (in Pearson’s r) are displayed in bold. 
FastSent results are reprinted from (Hill et al., 2016) where they are reported in two-digit precision. Dataset FastSent Siamese CBOW OnWN .74 (.70) .6073 (.6554) deft-forum .41 (.36) .4082 (.4188) deft-news .58 (.59) .5913 (.5754) headlines .57 (.59) .6364 (.6260) images .74 (.78) .6497 (.6484) tweet-news .63 (.66) .7315 (.7128) particular, we look into stability across iterations, different numbers of negative examples, and the dimensionality of the embeddings. Other parameter settings are set as reported in §3.4. 4.2.1 Performance across iterations Ideally, the optimization criterion of a learning algorithm ranges over the full domain of its loss function. As discussed in §2, our loss function only observes a sample. As such, convergence is not guaranteed. Regardless, an ideal learning system should not fluctuate in terms of performance relative to the amount of training data it observes, provided this amount is substantial: as training proceeds the performance should stabilize. To see whether the performance of Siamese CBOW fluctuates during training we monitor it during 5 epochs; at every 10,000,000 examples, and at the end of every epoch. Figure 2 displays the results for all 20 datasets. We observe that on the majority of datasets the performance shows very little variation. There are three exceptions. The performance on the 2014 deft-news dataset steadily decreases while the performance on 2013 OnWN steadily increases, though both seem to stabilize at the end of epoch 5. The most notable exception, however, is 2012 MSRvid, where the score, after an initial increase, drops consistently. This effect might be explained by the fact that this evaluation set primarily consists of very short sentences—it has the lowest average sentence length of all set: 6.63 with a standard deviation of 1.812. Therefore, a 300-dimensional representation appears too large for this dataset; this hypothesis is supported by the fact that 200dimensional embeddings work slightly better for this dataset (see Figure 4). 945 Epoch 1 - batch 2 Epoch 1 - batch 4 Epoch 1 - batch 6 End of epoch 1 Epoch 2 - batch 2 Epoch 2 - batch 4 Epoch 2 - batch 6 End of epoch 2 Epoch 3 - batch 2 Epoch 3 - batch 4 Epoch 3 - batch 6 End of epoch 3 Epoch 4 - batch 2 Epoch 4 - batch 4 Epoch 4 - batch 6 End of epoch 4 Epoch 5 - batch 2 Epoch 5 - batch 4 Epoch 5 - batch 6 End of epoch 5 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Pearson's r 2012 MSRpar 2012 MSRvid 2012 OnWN 2012 SMTeuroparl 2012 SMTnews 2013 FNWN 2013 OnWN 2013 SMT 2013 headlines 2014 OnWN 2014 deft-forum 2014 deft-news 2014 headlines 2014 images 2014 tweet-news 2015 answers-forums 2015 answers-students 2015 belief 2015 headlines 2015 images Figure 2: Performance of Siamese CBOW across 5 iterations. 4.2.2 Number of negative examples In Figure 3, the results of Siamese CBOW in terms of Pearson’s r are plotted for different numbers of negative examples. We observe that on most sets, the number of negative examples has limited effect on the performance of Siamese CBOW. Choosing a higher number, like 10, occasionally leads to slightly better performance, e.g., on the 2013 FNWN set. However, a small number like 1 or 2 typically suffices, and is sometimes markedly better, e.g., in the case of the 2015 belief set. 
Figure 3: Performance of Siamese CBOW with different numbers of negative examples (1, 2, 5, and 10), in terms of Pearson's r, on all 20 evaluation sets.

As a high number of negative examples comes at a substantial computational cost, we conclude from the findings presented here that, although Siamese CBOW is robust against different settings of this parameter, setting the number of negative examples to 1 or 2 should be the default choice.

4.2.3 Number of dimensions
Figure 4 plots the results of Siamese CBOW for different numbers of vector dimensions. We observe from the figure that for some sets (most notably 2014 deft-forum, 2015 answ-forums and 2015 belief) increasing the number of embedding dimensions consistently yields higher performance. A dimensionality that is too low (50 or 100) invariably leads to inferior results. As, similar to a higher number of negative examples, a higher embedding dimension leads to higher computational costs, we conclude from these findings that a moderate number of dimensions (200 or 300) is to be preferred.

Figure 4: Performance of Siamese CBOW across numbers of embedding dimensions (50, 100, 200, 300, 600, and 1200), in terms of Pearson's r, on all 20 evaluation sets.

4.3 Time complexity
For learning systems, time complexity comes into play in the training phase and in the prediction phase. For an end system employing sentence embeddings, the complexity at prediction time is the most crucial factor, which is why we omit an analysis of training complexity. We focus on comparing the time complexity for generating sentence embeddings for Siamese CBOW, and compare it to the baselines we use. The complexity of all algorithms we consider is O(n), i.e., linear in the number of input terms. As in practice the number of arithmetic operations is the critical factor in determining computing time, we will now focus on these. Both word2vec and the Siamese CBOW compute embeddings of a text T = t_1, ..., t_{|T|} by averaging the term embeddings. This requires |T| − 1 vector additions, and 1 multiplication by a scalar value (namely, 1/|T|). The skip-thought model is a recurrent neural network with GRU cells, which computes a set of equations for every term t in T, which we reprint for reference (Kiros et al., 2015):

r_t = σ(W_r x_t + U_r h_{t−1})
z_t = σ(W_z x_t + U_z h_{t−1})
h̄_t = tanh(W x_t + U(r_t ⊙ h_{t−1}))
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̄_t

As we can see from the formulas, there are 5|T| vector additions (+/-), 4|T| element-wise multiplications by a vector, 3|T| element-wise operations and 6|T| matrix multiplications, of which the latter, the matrix multiplications, are most expensive. This considerable difference in numbers of arithmetic operations is also observed in practice. We run tests on a single CPU, using identical code for extracting sentences from the evaluation sets, for every method. The sentence pairs are presented one by one to the models. We disregard the time it takes to load models. Speedups might of course be gained for all methods by presenting the sentences in batches to the models, by computing sentence representations in parallel and by running code on a GPU. However, as we are interested in the differences between the systems, we run the most simple and straightforward scenario.

                        20 sets     1 pair
Siamese CBOW (300d)         7.7     0.0004
word2vec (300d)             7.0     0.0004
skip-thought (1200d)   98,804.0     5.6

Table 3: Time spent per method on all 20 SemEval datasets, 17,608 sentence pairs, and the average time spent on a single sentence pair (time in seconds unless indicated otherwise).

Table 3 lists the number of seconds each method takes to generate and compare sentence embeddings for an input sentence pair. The difference between word2vec and Siamese CBOW is because of a different implementation of word lookup. We conclude from the observations presented here, together with the results in §4.1, that in a setting where speed at prediction time is pivotal, simple averaging methods like word2vec or Siamese CBOW are to be preferred over more involved methods like skip-thought.

4.4 Qualitative analysis
As Siamese CBOW directly averages word embeddings for sentences, we expect it to learn that words with little semantic impact have a low vector norm. Indeed, we find that the 10 words with lowest vector norm are to, of, and, the, a, in, that, with, on, and as. At the other side of the spectrum we find many personal pronouns: had, they, we, me, my, he, her, you, she, I, which is natural given that the corpus on which we train consists of fiction, which typically contains dialogues. It is interesting to see what the differences in related words are between Siamese CBOW and word2vec when trained on the same corpus. For example, for a cosine similarity > 0.6, the words related to her in word2vec space are she, his, my and hers. For Siamese CBOW, the only closely related word is she. Similarly, for the word me, word2vec finds him as most closely related word, while Siamese CBOW comes up with I and my. It seems from these few examples that Siamese CBOW learns to be very strict in choosing which words to relate to each other. From the results presented in this section we conclude that optimizing word embeddings for the task of being averaged across sentences with Siamese CBOW leads to embeddings that are effective in a large variety of settings. Furthermore, Siamese CBOW is robust to different parameter settings and its performance is stable across iterations. Lastly, we show that Siamese CBOW is fast and efficient in computing sentence embeddings at prediction time.

5 Related Work
A distinction can be made between supervised approaches for obtaining representations of short texts, where a model is optimised for a specific scenario, given a labeled training set, and unsupervised methods, trained on unlabeled data, that aim to capture short text semantics that are robust across tasks. In the first setting, word vectors are typically used as features or network initialisations (Kenter and de Rijke, 2015; Hu et al., 2014; Severyn and Moschitti, 2015; Yin and Schütze, 2015). Our work can be classified in the latter category of unsupervised approaches. Many models related to the one we present here are used in a multilingual setting (Hermann and Blunsom, 2014b; Hermann and Blunsom, 2014a; Lauly et al., 2014).
The key difference between this work and ours is that in a multilingual setting the goal is to predict, from a distributed representation of an input sentence, the same sentence in a different language, whereas our goals is to predict surrounding sentences. Wieting et al. (2016) apply a model similar to ours in a related but different setting where explicit semantic knowledge is leveraged. As in our setting, word embeddings are trained by averaging them. However, unlike in our proposal, a margin-based loss function is used, which involves a parameter that has to be tuned. Furthermore, to select negative examples, at every training step, a computationally expensive comparison is made between all sentences in the training batch. The most crucial difference is that a large set of phrase pairs explicitly marked for semantic similarity has to be available as training material. Obtaining such high-quality training material is non-trivial, expensive and limits an approach to settings for which such material is available. In our work, we leverage unlabeled training data, of which there is a virtually unlimited amount. As detailed in §2, our network predicts a sentence from its neighbouring sentences. The notion of learning from context sentences is also applied in (Kiros et al., 2015), where a recurrent neural network is employed. Our way of averaging the vectors of words contained in a sentence is more similar to the CBOW architecture of word2vec (Mikolov et al., 2013a), in which all context word vectors are aggregated to predict the one omitted word. A crucial difference between our approach and the word2vec CBOW approach is that we compare sentence representations directly, rather than comparing a (partial) sentence representation to a word representation. Given the correspondence between word2vec’s CBOW model and ours, we included it as a baseline in our experiments in §3. As the skip-gram architecture has proven to be a strong baseline too in many settings, we include it too. Yih et al. (2011) also propose a siamese architecture. Short texts are represented by tf-idf vectors and a linear combination of input weights is learnt by a two-layer fully connected network, which is used to represent the input text. The cosine similarity between pairs of representations is computed, but unlike our proposal, the differences between similarities of a positive and negative sentence pair are combined in a logistic loss function. Finally, independently from our work, Hill et al. (2016) also present a log-linear model. Rather than comparing sentence representations to each other, as we propose, words in one sentence are compared to the representation of another sentence. As both input and output vectors are learnt, while we tie the parameters across the entire model, Hill et al. (2016)’s model has twice as many parameters as ours. Most importantly, however, the cost function used in (Hill et al., 2016) is crucially different from ours. As words in surrounding sentences are being compared to a sentence representation, the final layer of their network produces a softmax over the entire vocabulary. This is fundamentally different from the final softmax over cosines between sentence representations that we propose. Furthermore, the softmax over the vocabulary is, obviously, of vocabulary size, and hence grows when bigger vocabularies are used, causing additional computational cost. In our case, the size of the softmax is the number of positive plus negative examples (see §2.1). 
When the vocabulary grows, this size is unaffected. 6 Conclusion We have presented Siamese CBOW, a neural network architecture that efficiently learns word embeddings optimized for producing sentence representations. The model is trained using only unla948 beled text data. It predicts, from an input sentence representation, the preceding and following sentence. We evaluated the model on 20 test sets and show that in a majority of cases, 14 out of 20, Siamese CBOW outperforms a word2vec baseline and a baseline based on the recently proposed skip-thought architecture. As further analysis on various choices of parameters show that the method is stable across settings, we conclude that Siamese CBOW provides a robust way of generating high-quality sentence representations. Word and sentence embeddings are ubiquitous and many different ways of using them in supervised tasks have been proposed. It is beyond the scope of this paper to provide a comprehensive analysis of all supervised methods using word or sentence embeddings and the effect Siamese CBOW would have on them. However, it would be interesting to see how Siamese CBOW embeddings would affect results in supervised tasks. Lastly, although we evaluated Siamese CBOW on sentence pairs, there is no theoretical limitation restricting it to sentences. It would be interesting to see how embeddings for larger pieces of texts, such as documents, would perform in document clustering or filtering tasks. Acknowledgments The authors wish to express their gratitude for the valuable advice and relevant pointers of the anonymous reviewers. Many thanks to Christophe Van Gysel for implementation-related help. This research was supported by Ahold, Amsterdam Data Science, the Bloomberg Research Grant program, the Dutch national program COMMIT, Elsevier, the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement nr 312827 (VOX-Pol), the ESF Research Network Program ELIAS, the Royal Dutch Academy of Sciences (KNAW) under the Elite Network Shifts project, the Microsoft Research Ph.D. program, the Netherlands eScience Center under project number 027.012.105, the Netherlands Institute for Sound and Vision, the Netherlands Organisation for Scientific Research (NWO) under project nrs 727.011.005, 612.001.116, HOR-11-10, 640.006.013, 612.066.930, CI-1425, SH-322-15, 652.002.001, 612.001.551, the Yahoo Faculty Research and Engagement Program, and Yandex. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. References Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385–393. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task (*SEM 2013), pages 32–43. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. 
Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81–91. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, I Nigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252–263. Henri Bal, Dick Epema, Cees de Laat, Rob van Nieuwpoort, John Romein, Frank Seinstra, Cees Snoek, and Harry Wijshoff. 2016. A medium-scale distributed system for computer science research: Infrastructure for the long term. Computer, 49(5):54– 63. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 740–750. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning (ICML 2008), pages 160–167. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2014. Retrofitting word vectors to semantic lexicons. In Proceedings of the North American Chapter of the 949 Association for Computational Linguistics (NAACL 2014). Samuel J. Gershman and Joshua B. Tenenbaum. 2015. Phrase similarity in humans and machines. In Proceedings of the 37th Annual Conference of the Cognitive Science Society, pages 776–781. Karl Moritz Hermann and Phil Blunsom. 2014a. Multilingual distributed representations without word alignment. In Proceedings of the International Conference on Learning Representations (ICLR 2014). Karl Moritz Hermann and Phil Blunsom. 2014b. Multilingual models for compositional distributed semantics. In Proceeedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pages 58–68. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL 2016). Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems (NIPS 2014), pages 2042–2050. Tom Kenter and Maarten de Rijke. 2015. Short text similarity with word embeddings. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (CIKM 2015), pages 1411–1420. Tom Kenter, Melvin Wevers, Pim Huijnen, and Maarten de Rijke. 2015. Ad hoc monitoring of vocabulary shifts over time. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (CIKM 2015), pages 1191–1200. Yoon Kim, I Yi-Chiu., Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. Proceeedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pages 61–65. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems 28 (NIPS 2015), pages 3294–3302. 
Curran Associates, Inc. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Q Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pages 957–966. Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Advances in Neural Information Processing Systems (NIPS 2014), pages 1853–1861. Andr´e Lynum, Partha Pakray, Bj¨orn Gamb¨ack, and Sergio Jimenez. 2014. Ntnu: Measuring semantic similarity with sublexical feature representations and soft cardinality. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 448–453. Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv e-prints, 1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS 2013), pages 3111–3119. Ridho Reinanda, Edgar Meij, and Maarten de Rijke. 2015. Mining, ranking and recommending entity aspects. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2015), pages 263–272. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2015), pages 373–382. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL 2012), pages 1201–1211. Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688. Nikos Voskarides, Edgar Meij, Manos Tsagkias, Maarten de Rijke, and Wouter Weerkamp. 2015. Learning to explain entity relationships in knowledge graphs. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and The 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACLIJCNLP 2015), pages 564–574. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. Proceedings of the International Conference on Learning Representations (ICLR 2016). 950 Wentau Yih, Kristina Toutanova, John C. Platt, and Christopher Meek. 2011. Learning discriminative projections for text similarity measures. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 247–256. Wenpeng Yin and Hinrich Sch¨utze. 2015. Convolutional neural network for paraphrase identification. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL 2015), pages 901–911. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. In NIPS 2014 Deep Learning and Representation Learning Workshop. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. 
Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19–27. Will Y. Zou, Richard Socher, Daniel M. Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), pages 1393–1398. 951
2016
89
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 86–96, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Improving Neural Machine Translation Models with Monolingual Data Rico Sennrich and Barry Haddow and Alexandra Birch School of Informatics, University of Edinburgh {rico.sennrich,a.birch}@ed.ac.uk, [email protected] Abstract Neural Machine Translation (NMT) has obtained state-of-the art performance for several language pairs, while only using parallel data for training. Targetside monolingual data plays an important role in boosting fluency for phrasebased statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic backtranslation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English↔German (+2.8–3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish→English (+2.1–3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English→German. 1 Introduction Neural Machine Translation (NMT) has obtained state-of-the art performance for several language pairs, while only using parallel data for training. Target-side monolingual data plays an important role in boosting fluency for phrase-based statistiThe research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. Samsung R&D Institute Poland. cal machine translation, and we investigate the use of monolingual data for NMT. Language models trained on monolingual data have played a central role in statistical machine translation since the first IBM models (Brown et al., 1990). There are two major reasons for their importance. Firstly, word-based and phrase-based translation models make strong independence assumptions, with the probability of translation units estimated independently from context, and language models, by making different independence assumptions, can model how well these translation units fit together. Secondly, the amount of available monolingual data in the target language typically far exceeds the amount of parallel data, and models typically improve when trained on more data, or data more similar to the translation task. In (attentional) encoder-decoder architectures for neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), the decoder is essentially an RNN language model that is also conditioned on source context, so the first rationale, adding a language model to compensate for the independence assumptions of the translation model, does not apply. However, the data argument is still valid in NMT, and we expect monolingual data to be especially helpful if parallel data is sparse, or a poor fit for the translation task, for instance because of a domain mismatch. 
In contrast to previous work, which integrates a separately trained RNN language model into the NMT model (Gülçehre et al., 2015), we explore strategies to include monolingual training data in the training process without changing the neural network architecture. This makes our approach applicable to different NMT architectures. The main contributions of this paper are as follows: • we show that we can improve the machine translation quality of NMT systems by mixing monolingual target sentences into the 86 training set. • we investigate two different methods to fill the source side of monolingual training instances: using a dummy source sentence, and using a source sentence obtained via backtranslation, which we call synthetic. We find that the latter is more effective. • we successfully adapt NMT models to a new domain by fine-tuning with either monolingual or parallel in-domain data. 2 Neural Machine Translation We follow the neural machine translation architecture by Bahdanau et al. (2015), which we will briefly summarize here. However, we note that our approach is not specific to this architecture. The neural machine translation system is implemented as an encoder-decoder network with recurrent neural networks. The encoder is a bidirectional neural network with gated recurrent units (Cho et al., 2014) that reads an input sequence x = (x1, ..., xm) and calculates a forward sequence of hidden states (−→h 1, ..., −→h m), and a backward sequence (←−h 1, ..., ←−h m). The hidden states −→h j and ←−h j are concatenated to obtain the annotation vector hj. The decoder is a recurrent neural network that predicts a target sequence y = (y1, ..., yn). Each word yi is predicted based on a recurrent hidden state si, the previously predicted word yi−1, and a context vector ci. ci is computed as a weighted sum of the annotations hj. The weight of each annotation hj is computed through an alignment model αij, which models the probability that yi is aligned to xj. The alignment model is a singlelayer feedforward neural network that is learned jointly with the rest of the network through backpropagation. A detailed description can be found in (Bahdanau et al., 2015). Training is performed on a parallel corpus with stochastic gradient descent. For translation, a beam search with small beam size is employed. 3 NMT Training with Monolingual Training Data In machine translation, more monolingual data (or monolingual data more similar to the test set) serves to improve the estimate of the prior probability p(T) of the target sentence T, before taking the source sentence S into account. In contrast to (Gülçehre et al., 2015), who train separate language models on monolingual training data and incorporate them into the neural network through shallow or deep fusion, we propose techniques to train the main NMT model with monolingual data, exploiting the fact that encoder-decoder neural networks already condition the probability distribution of the next target word on the previous target words. We describe two strategies to do this: providing monolingual training examples with an empty (or dummy) source sentence, or providing monolingual training data with a synthetic source sentence that is obtained from automatically translating the target sentence into the source language, which we will refer to as back-translation. 
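Both strategies amount to manufacturing additional (source, target) training pairs from target-side monolingual text. The sketch below is an illustration under our own naming, not the paper's released pipeline: translate_to_source stands in for whatever target-to-source system performs the back-translation (in the paper, a separately trained NMT model), and the toy sentences are placeholders.

```python
import random

def dummy_source_pairs(mono_target):
    """Strategy 1 (§3.1): pair each monolingual target sentence with a
    single-token dummy source, so parallel and monolingual examples can be
    processed with the same network graph."""
    return [("<null>", t) for t in mono_target]

def synthetic_source_pairs(mono_target, translate_to_source):
    """Strategy 2 (§3.2): back-translate each target sentence into the source
    language and treat the result as a (synthetic source, real target) pair."""
    return [(translate_to_source(t), t) for t in mono_target]

def build_training_set(parallel, mono_target, translate_to_source=None,
                       ratio=1, seed=42):
    """Mix parallel data with monolingual-derived examples (1-to-1 by default)
    and shuffle; in the paper the monolingual side is resampled every epoch."""
    n = ratio * len(parallel)
    sample = random.Random(seed).choices(mono_target, k=n)
    if translate_to_source is None:
        extra = dummy_source_pairs(sample)
    else:
        extra = synthetic_source_pairs(sample, translate_to_source)
    mixed = list(parallel) + extra
    random.Random(seed).shuffle(mixed)
    return mixed

# Toy usage for the German->English direction: source = German, target = English.
# fake_backtranslate is a placeholder for a real English->German back-translator.
parallel = [("das ist ein test", "this is a test")]
mono_en = ["another sentence", "more monolingual text"]
fake_backtranslate = lambda t: "<synthetisch> " + t
print(build_training_set(parallel, mono_en, fake_backtranslate))
```

With translate_to_source=None the function reproduces the dummy-source setup of §3.1; passing a back-translation function gives the synthetic setup of §3.2, in which only the source side of the added pairs is machine-generated.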
3.1 Dummy Source Sentences The first technique we employ is to treat monolingual training examples as parallel examples with empty source side, essentially adding training examples whose context vector ci is uninformative, and for which the network has to fully rely on the previous target words for its prediction. This could be conceived as a form of dropout (Hinton et al., 2012), with the difference that the training instances that have the context vector dropped out constitute novel training data. We can also conceive of this setup as multi-task learning, with the two tasks being translation when the source is known, and language modelling when it is unknown. During training, we use both parallel and monolingual training examples in the ratio 1-to-1, and randomly shuffle them. We define an epoch as one iteration through the parallel data set, and resample from the monolingual data set for every epoch. We pair monolingual sentences with a single-word dummy source side <null> to allow processing of both parallel and monolingual training examples with the same network graph.1 For monolingual minibatches2, we freeze the network parameters of the encoder and the attention model. One problem with this integration of monolin1One could force the context vector ci to be 0 for monolingual training instances, but we found that this does not solve the main problem with this approach, discussed below. 2For efficiency, Bahdanau et al. (2015) sort sets of 20 minibatches according to length. This also groups monolingual training instances together. 87 gual data is that we cannot arbitrarily increase the ratio of monolingual training instances, or finetune a model with only monolingual training data, because different output layer parameters are optimal for the two tasks, and the network ‘unlearns’ its conditioning on the source context if the ratio of monolingual training instances is too high. 3.2 Synthetic Source Sentences To ensure that the output layer remains sensitive to the source context, and that good parameters are not unlearned from monolingual data, we propose to pair monolingual training instances with a synthetic source sentence from which a context vector can be approximated. We obtain these through back-translation, i.e. an automatic translation of the monolingual target text into the source language. During training, we mix synthetic parallel text into the original (human-translated) parallel text and do not distinguish between the two: no network parameters are frozen. Importantly, only the source side of these additional training examples is synthetic, and the target side comes from the monolingual corpus. 4 Evaluation We evaluate NMT training on parallel text, and with additional monolingual data, on English↔German and Turkish→English, using training and test data from WMT 15 for English↔German, IWSLT 15 for English→German, and IWSLT 14 for Turkish→English. 4.1 Data and Methods We use Groundhog3 as the implementation of the NMT system for all experiments (Bahdanau et al., 2015; Jean et al., 2015a). We generally follow the settings and training procedure described by Sennrich et al. (2016). For English↔German, we report case-sensitive BLEU on detokenized text with mteval-v13a.pl for comparison to official WMT and IWSLT results. For Turkish→English, we report case-sensitive BLEU on tokenized text with multi-bleu.perl for comparison to results by Gülçehre et al. (2015). Gülçehre et al. 
(2015) determine the network vocabulary based on the parallel training data, 3github.com/sebastien-j/LV_groundhog dataset sentences WMTparallel 4 200 000 WITparallel 200 000 WMTmono_de 160 000 000 WMTsynth_de 3 600 000 WMTmono_en 118 000 000 WMTsynth_en 4 200 000 Table 1: English↔German training data. and replace out-of-vocabulary words with a special UNK symbol. They remove monolingual sentences with more than 10% UNK symbols. In contrast, we represent unseen words as sequences of subword units (Sennrich et al., 2016), and can represent any additional training data with the existing network vocabulary that was learned on the parallel data. In all experiments, the network vocabulary remains fixed. 4.1.1 English↔German We use all parallel training data provided by WMT 2015 (Bojar et al., 2015)4. We use the News Crawl corpora as additional training data for the experiments with monolingual data. The amount of training data is shown in Table 1. Baseline models are trained for a week. Ensembles are sampled from the last 4 saved models of training (saved at 12h-intervals). Each model is fine-tuned with fixed embeddings for 12 hours. For the experiments with synthetic parallel data, we back-translate a random sample of 3 600 000 sentences from the German monolingual data set into English. The German→English system used for this is the baseline system (parallel). Translation took about a week on an NVIDIA Titan Black GPU. For experiments in German→English, we back-translate 4 200 000 monolingual English sentences into German, using the English→German system +synthetic. Note that we always use single models for backtranslation, not ensembles. We leave it to future work to explore how sensitive NMT training with synthetic data is to the quality of the backtranslation. We tokenize and truecase the training data, and represent rare words via BPE (Sennrich et al., 2016). Specifically, we follow Sennrich et al. (2016) in performing BPE on the joint vocabulary with 89 500 merge operations. The network vo4http://www.statmt.org/wmt15/ 88 dataset sentences WIT 160 000 SETimes 160 000 Gigawordmono 177 000 000 Gigawordsynth 3 200 000 Table 2: Turkish→English training data. cabulary size is 90 000. We also perform experiments on the IWSLT 15 test sets to investigate a cross-domain setting.5 The test sets consist of TED talk transcripts. As indomain training data, IWSLT provides the WIT3 parallel corpus (Cettolo et al., 2012), which also consists of TED talks. 4.1.2 Turkish→English We use data provided for the IWSLT 14 machine translation track (Cettolo et al., 2014), namely the WIT3 parallel corpus (Cettolo et al., 2012), which consists of TED talks, and the SETimes corpus (Tyers and Alperen, 2010).6 After removal of sentence pairs which contain empty lines or lines with a length ratio above 9, we retain 320 000 sentence pairs of training data. For the experiments with monolingual training data, we use the English LDC Gigaword corpus (Fifth Edition). The amount of training data is shown in Table 2. With only 320 000 sentences of parallel data available for training, this is a much lower-resourced translation setting than English↔German. Gülçehre et al. (2015) segment the Turkish text with the morphology tool Zemberek, followed by a disambiguation of the morphological analysis (Sak et al., 2007), and removal of non-surface tokens produced by the analysis. We use the same preprocessing7. 
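Subword segmentation plays a central role in how both language pairs are represented. As a schematic illustration only (the merge operations below are made up, and the subword-nmt implementation of Sennrich et al. (2016) that is actually used handles the end-of-word marker and vocabulary thresholds differently), the core of applying an already-learned list of BPE merges to a word looks as follows:

```python
def apply_bpe(word, merges):
    """Greedy BPE segmentation: repeatedly merge the adjacent symbol pair with
    the highest learned priority (lowest rank) until no learned pair remains.
    `merges` is an ordered list of symbol pairs produced by BPE learning."""
    rank = {pair: i for i, pair in enumerate(merges)}
    symbols = list(word) + ["</w>"]          # end-of-word marker
    while len(symbols) > 1:
        pairs = [(symbols[i], symbols[i + 1]) for i in range(len(symbols) - 1)]
        best = min(pairs, key=lambda p: rank.get(p, float("inf")))
        if best not in rank:
            break
        i = pairs.index(best)
        symbols[i:i + 2] = ["".join(best)]   # merge the best pair in place
    return symbols

# Made-up merge operations for illustration; the English<->German experiments
# learn 89,500 of them jointly over both sides of the parallel data.
toy_merges = [("l", "o"), ("lo", "w"), ("e", "r"), ("er", "</w>"), ("w", "er</w>")]
print(apply_bpe("lower", toy_merges))        # -> ['low', 'er</w>']
```

Because any word can be decomposed into known subword units in this way, back-translated or monolingual text never introduces out-of-vocabulary symbols, which is what allows the network vocabulary to remain fixed across all experiments.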
For both Turkish and English, we represent rare words (or morphemes in the case of Turkish) as character bigram sequences (Sennrich et al., 2016). The 20 000 most frequent words (morphemes) are left unsegmented. The networks have a vocabulary size of 23 000 symbols. To obtain a synthetic parallel training set, we back-translate a random sample of 3 200 000 sentences from Gigaword. We use an English→Turkish NMT system trained with the same settings as the Turkish→English baseline system. 5http://workshop2015.iwslt.org/ 6http://workshop2014.iwslt.org/ 7github.com/orhanf/zemberekMorphTR We found overfitting to be a bigger problem than with the larger English↔German data set, and follow Gülçehre et al. (2015) in using Gaussian noise (stddev 0.01) (Graves, 2011), and dropout on the output layer (p=0.5) (Hinton et al., 2012). We also use early stopping, based on BLEU measured every three hours on tst2010, which we treat as development set. For Turkish→English, we use gradient clipping with threshold 5, following Gülçehre et al. (2015), in contrast to the threshold 1 that we use for English↔German, following Jean et al. (2015a). 4.2 Results 4.2.1 English→German WMT 15 Table 3 shows English→German results with WMT training and test data. We find that mixing parallel training data with monolingual data with a dummy source side in a ratio of 1-1 improves quality by 0.4–0.5 BLEU for the single system, 1 BLEU for the ensemble. We train the system for twice as long as the baseline to provide the training algorithm with a similar amount of parallel training instances. To ensure that the quality improvement is due to the monolingual training instances, and not just increased training time, we also continued training our baseline system for another week, but saw no improvements in BLEU. Including synthetic data during training is very effective, and yields an improvement over our baseline by 2.8–3.4 BLEU. Our best ensemble system also outperforms a syntax-based baseline (Sennrich and Haddow, 2015) by 1.2–2.1 BLEU. We also substantially outperform NMT results reported by Jean et al. (2015a) and Luong et al. (2015), who previously reported SOTA result.8 We note that the difference is particularly large for single systems, since our ensemble is not as diverse as that of Luong et al. (2015), who used 8 independently trained ensemble components, whereas we sampled 4 ensemble components from the same training run. 4.2.2 English→German IWSLT 15 Table 4 shows English→German results on IWSLT test sets. IWSLT test sets consist of TED talks, and are thus very dissimilar from the WMT 8Luong et al. (2015) report 20.9 BLEU (tokenized) on newstest2014 with a single model, and 23.0 BLEU with an ensemble of 8 models. Our best single system achieves a tokenized BLEU (as opposed to untokenized scores reported in Table 3) of 23.8, and our ensemble reaches 25.0 BLEU. 89 BLEU name training instances newstest2014 newstest2015 single ens-4 single ens-4 syntax-based (Sennrich and Haddow, 2015) 22.6 24.4 Neural MT (Jean et al., 2015b) 22.4 parallel 37m (parallel) 19.9 20.4 22.8 23.6 +monolingual 49m (parallel) / 49m (monolingual) 20.4 21.4 23.2 24.6 +synthetic 44m (parallel) / 36m (synthetic) 22.7 23.8 25.7 26.5 Table 3: English→German translation performance (BLEU) on WMT training/test sets. Ens-4: ensemble of 4 models. Number of training instances varies due to differences in training time and speed. 
name fine-tuning BLEU data instances tst2013 tst2014 tst2015 NMT (Luong and Manning, 2015) (single model) 29.4 NMT (Luong and Manning, 2015) (ensemble of 8) 31.4 27.6 30.1 1 parallel 25.2 22.6 24.0 2 +synthetic 26.5 23.5 25.5 3 2+WITmono_de WMTparallel / WITmono 200k/200k 26.6 23.6 25.4 4 2+WITsynth_de WITsynth 200k 28.2 24.4 26.7 5 2+WITparallel WIT 200k 30.4 25.9 28.4 Table 4: English→German translation performance (BLEU) on IWSLT test sets (TED talks). Single models. test sets, which are news texts. We investigate if monolingual training data is especially valuable if it can be used to adapt a model to a new genre or domain, specifically adapting a system trained on WMT data to translating TED talks. Systems 1 and 2 correspond to systems in Table 3, trained only on WMT data. System 2, trained on parallel and synthetic WMT data, obtains a BLEU score of 25.5 on tst2015. We observe that even a small amount of fine-tuning9, i.e. continued training of an existing model, on WIT data can adapt a system trained on WMT data to the TED domain. By back-translating the monolingual WIT corpus (using a German→English system trained on WMT data, i.e. without in-domain knowledge), we obtain the synthetic data set WITsynth. A single epoch of fine-tuning on WITsynth (system 4) results in a BLEU score of 26.7 on tst2015, or an improvement of 1.2 BLEU. We observed no improvement from fine-tuning on WITmono, the monolingual TED corpus with dummy input (system 3). These adaptation experiments with monolingual data are slightly artificial in that parallel training data is available. System 5, which is finetuned with the original WIT training data, obtains a BLEU of 28.4 on tst2015, which is an improve9We leave the word embeddings fixed for fine-tuning. BLEU name 2014 2015 PBSMT (Haddow et al., 2015) 28.8 29.3 NMT (Gülçehre et al., 2015) 23.6 +shallow fusion 23.7 +deep fusion 24.0 parallel 25.9 26.7 +synthetic 29.5 30.4 +synthetic (ensemble of 4) 30.8 31.6 Table 5: German→English translation performance (BLEU) on WMT training/test sets (newstest2014; newstest2015). ment of 2.9 BLEU. While it is unsurprising that in-domain parallel data is most valuable, we find it encouraging that NMT domain adaptation with monolingual data is also possible, and effective, since there are settings where only monolingual in-domain data is available. The best results published on this dataset are by Luong and Manning (2015), obtained with an ensemble of 8 independently trained models. In a comparison of single-model results, we outperform their model on tst2013 by 1 BLEU. 90 4.2.3 German→English WMT 15 Results for German→English on the WMT 15 data sets are shown in Table 5. Like for the reverse translation direction, we see substantial improvements (3.6–3.7 BLEU) from adding monolingual training data with synthetic source sentences, which is substantially bigger than the improvement observed with deep fusion (Gülçehre et al., 2015); our ensemble outperforms the previous state of the art on newstest2015 by 2.3 BLEU. 4.2.4 Turkish→English IWSLT 14 Table 6 shows results for Turkish→English. On average, we see an improvement of 0.6 BLEU on the test sets from adding monolingual data with a dummy source side in a 1-1 ratio10, although we note a high variance between different test sets. With synthetic training data (Gigawordsynth), we outperform the baseline by 2.7 BLEU on average, and also outperform results obtained via shallow or deep fusion by Gülçehre et al. (2015) by 0.5 BLEU on average. 
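Returning to the fine-tuning experiments above: footnote 9 notes that the word embeddings are kept fixed during fine-tuning. The paper's implementation is the Theano-based Groundhog system; purely as an illustration of that idea, the following numpy sketch (with toy parameter names and shapes of our own) applies an SGD update to every parameter group except the embedding matrices.

```python
import numpy as np

def finetune_step(params, grads, lr=0.01,
                  frozen=("src_embeddings", "trg_embeddings")):
    """One continued-training update on in-domain data: plain SGD on all
    parameters except the word embeddings, which stay fixed."""
    for name in params:
        if name not in frozen:
            params[name] -= lr * grads[name]
    return params

# Toy parameter shapes standing in for an encoder-decoder model.
rng = np.random.default_rng(0)
params = {"src_embeddings": rng.normal(size=(50, 8)),
          "trg_embeddings": rng.normal(size=(50, 8)),
          "decoder_weights": rng.normal(size=(16, 8))}
grads = {name: np.ones_like(value) for name, value in params.items()}

frozen_before = params["trg_embeddings"].copy()
finetune_step(params, grads)
assert np.array_equal(frozen_before, params["trg_embeddings"])  # unchanged
print("embeddings frozen; decoder weights updated by one SGD step")
```

In the adaptation experiments, such updates are run for a single epoch over the in-domain (parallel or back-translated) data, starting from the converged WMT model.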
To compare to what extent synthetic data has a regularization effect, even without novel training data, we also back-translate the target side of the parallel training text to obtain the training corpus parallelsynth. Mixing the original parallel corpus with parallelsynth (ratio 1-1) gives some improvement over the baseline (1.7 BLEU on average), but the novel monolingual training data (Gigawordmono) gives higher improvements, despite being out-of-domain in relation to the test sets. We speculate that novel in-domain monolingual data would lead to even higher improvements. 4.2.5 Back-translation Quality for Synthetic Data One question that our previous experiments leave open is how the quality of the automatic backtranslation affects training with synthetic data. To investigate this question, we back-translate the same German monolingual corpus with three different German→English systems: • with our baseline system and greedy decoding • with our baseline system and beam search (beam size 12). This is the same system used for the experiments in Table 3. 10We also experimented with higher ratios of monolingual data, but this led to decreased BLEU scores. BLEU DE→EN EN→DE back-translation 2015 2014 2015 none 20.4 23.6 parallel (greedy) 22.3 23.2 26.0 parallel (beam 12) 25.0 23.8 26.5 synthetic (beam 12) 28.3 23.9 26.6 ensemble of 3 24.2 27.0 ensemble of 12 24.7 27.6 Table 7: English→German translation performance (BLEU) on WMT training/test sets (newstest2014; newstest2015). Systems differ in how the synthetic training data is obtained. Ensembles of 4 models (unless specified otherwise). • with the German→English system that was itself trained with synthetic data (beam size 12). BLEU scores of the German→English systems, and of the resulting English→German systems that are trained on the different backtranslations, are shown in Table 7. The quality of the German→English back-translation differs substantially, with a difference of 6 BLEU on newstest2015. Regarding the English→German systems trained on the different synthetic corpora, we find that the 6 BLEU difference in back-translation quality leads to a 0.6–0.7 BLEU difference in translation quality. This is balanced by the fact that we can increase the speed of back-translation by trading off some quality, for instance by reducing beam size, and we leave it to future research to explore how much the amount of synthetic data affects translation quality. We also show results for an ensemble of 3 models (the best single model of each training run), and 12 models (all 4 models of each training run). Thanks to the increased diversity of the ensemble components, these ensembles outperform the ensembles of 4 models that were all sampled from the same training run, and we obtain another improvement of 0.8–1.0 BLEU. 4.3 Contrast to Phrase-based SMT The back-translation of monolingual target data into the source language to produce synthetic parallel text has been previously explored for phrasebased SMT (Bertoldi and Federico, 2009; Lambert et al., 2011). 
While our approach is technically similar, synthetic parallel data fulfills novel roles in NMT.

name            training data             instances   tst2011  tst2012  tst2013  tst2014
baseline (Gülçehre et al., 2015)                        18.4     18.8     19.9     18.7
deep fusion (Gülçehre et al., 2015)                     20.2     20.2     21.3     20.6
baseline        parallel                  7.2m          18.6     18.2     18.4     18.3
parallelsynth   parallel/parallelsynth    6m/6m         19.9     20.4     20.1     20.0
Gigawordmono    parallel/Gigawordmono     7.6m/7.6m     18.8     19.6     19.4     18.2
Gigawordsynth   parallel/Gigawordsynth    8.4m/8.4m     21.2     21.1     21.8     20.4

Table 6: Turkish→English translation performance (tokenized BLEU) on IWSLT test sets (TED talks). Single models. Number of training instances varies due to early stopping.

system        BLEU (WMT)   BLEU (IWSLT)
parallel      20.1         21.5
+synthetic    20.8         21.6
PBSMT gain    +0.7         +0.1
NMT gain      +2.9         +1.2

Table 8: Phrase-based SMT results (English→German) on WMT test sets (average of newstest201{4,5}) and IWSLT test sets (average of tst201{3,4,5}), and average BLEU gain from adding synthetic data for both PBSMT and NMT.

To explore the relative effectiveness of back-translated data for phrase-based SMT and NMT, we train two phrase-based SMT systems with Moses (Koehn et al., 2007), using only WMTparallel, or both WMTparallel and WMTsynth_de, for training the translation and reordering model. Both systems contain the same language model, a 5-gram Kneser-Ney model trained on all available WMT data. We use the baseline features described by Haddow et al. (2015). Results are shown in Table 8. In phrase-based SMT, we find that the use of back-translated training data has a moderate positive effect on the WMT test sets (+0.7 BLEU), but not on the IWSLT test sets. This is in line with the expectation that the main effect of back-translated data for phrase-based SMT is domain adaptation (Bertoldi and Federico, 2009). Both the WMT test sets and the News Crawl corpora which we used as monolingual data come from the same source, a web crawl of newspaper articles.11 In contrast, News Crawl is out-of-domain for the IWSLT test sets.

11The WMT test sets are held-out from News Crawl.

Figure 1: Turkish→English training and development set (tst2010) cross-entropy as a function of training time (number of training instances), with training and development curves for the parallel, parallelsynth, Gigawordmono and Gigawordsynth systems.

In contrast to phrase-based SMT, which can make use of monolingual data via the language model, NMT has so far not been able to use monolingual data to great effect, and without requiring architectural changes. We find that the effect of synthetic parallel data is not limited to domain adaptation, and that even out-of-domain synthetic data improves NMT quality, as in our evaluation on IWSLT. The fact that the synthetic data is more effective on the WMT test sets (+2.9 BLEU) than on the IWSLT test sets (+1.2 BLEU) supports the hypothesis that domain adaptation contributes to the effectiveness of adding synthetic data to NMT training. It is an important finding that back-translated data, which is mainly effective for domain adaptation in phrase-based SMT, is more generally useful in NMT, and has positive effects that go beyond domain adaptation. In the next section, we will investigate further reasons for its effectiveness.
Figure 2: English→German training and development set (newstest2013) cross-entropy as a function of training time (number of training instances), with training and development curves for the system trained on parallel data only (WMTparallel) and the system with additional synthetic data (WMTsynth).

4.4 Analysis
We previously indicated that overfitting is a concern with our baseline system, especially on small data sets of several hundred thousand training sentences, despite the regularization employed. This overfitting is illustrated in Figure 1, which plots training and development set cross-entropy by training time for Turkish→English models. For comparability, we measure training set cross-entropy for all models on the same random sample of the parallel training set. We can see that the model trained on only parallel training data quickly overfits, while all three monolingual data sets (parallelsynth, Gigawordmono, or Gigawordsynth) delay overfitting, and give better perplexity on the development set. The best development set cross-entropy is reached by Gigawordsynth. Figure 2 shows cross-entropy for English→German, comparing the system trained on only parallel data and the system that includes synthetic training data. Since more training data is available for English→German, there is no indication that overfitting happens during the first 40 million training instances (or 7 days of training); while both systems obtain comparable training set cross-entropies, the system with synthetic data reaches a lower cross-entropy on the development set. One explanation for this is the domain effect discussed in the previous section.

system       produced   attested   natural
parallel     1078       53.4%      74.9%
+mono        994        61.6%      84.6%
+synthetic   1217       56.4%      82.5%

Table 9: Number of words in system output that do not occur in the parallel training data (count_ref = 1168), and the proportion that is attested in data, or natural according to a native speaker. English→German; newstest2015; ensemble systems.

A central theoretical expectation is that monolingual target-side data improves the model's fluency, its ability to produce natural target-language sentences. As a proxy for sentence-level fluency, we investigate word-level fluency, specifically words produced as sequences of subword units, and whether NMT systems trained with additional monolingual data produce more natural words. For instance, the English→German systems translate the English phrase civil rights protections as a single compound, composed of three subword units: Bürger|rechts|schutzes12, and we analyze how many of the multi-unit words that the translation systems produce are well-formed German words. We compare the number of words in the system output for the newstest2015 test set which are produced via subword units and do not occur in the parallel training corpus. We also count how many of them are attested in the full monolingual corpus or the reference translation, all of which we consider 'natural'. Additionally, the main author, a native speaker of German, annotated a random subset (n = 100) of unattested words of each system according to their naturalness13, distinguishing between natural German words (or names) such as Literatur|klassen 'literature classes' and nonsensical ones such as *As|best|atten (a misspelling of Asbestmatten 'asbestos mats').
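The word-level fluency analysis just described can be approximated with a few set operations. The sketch below uses toy data and our own helper names, and marks subword boundaries with '|' as in the examples above (the actual systems mark them differently at decoding time): it collects multi-unit words from system output, keeps those unseen in the parallel training data, and checks whether they are attested in a monolingual corpus or the reference.

```python
def multi_unit_words(subword_output_lines):
    """Collect words the system produced from more than one subword unit;
    '|' marks internal subword boundaries, as in Buerger|rechts|schutzes."""
    words = set()
    for line in subword_output_lines:
        for token in line.split():
            if "|" in token:
                words.add(token.replace("|", ""))
    return words

def fluency_counts(system_out, parallel_vocab, mono_vocab, reference_vocab):
    """Number of produced multi-unit words unseen in the parallel training
    data, and how many of those are attested in the monolingual corpus or
    the reference (considered 'natural' in Table 9)."""
    novel = {w for w in multi_unit_words(system_out) if w not in parallel_vocab}
    attested = {w for w in novel if w in mono_vocab or w in reference_vocab}
    return len(novel), len(attested)

# Toy data; the real counts in Table 9 are computed over newstest2015 outputs.
system_out = ["das sind Bürger|rechts|schutzes Fragen",
              "die Literatur|klassen sind voll",
              "dort liegen As|best|atten"]
parallel_vocab = {"das", "sind", "Fragen", "die", "voll", "dort", "liegen"}
mono_vocab = {"Bürgerrechtsschutzes", "Literaturklassen", "Asbestmatten"}
reference_vocab = {"Literaturklassen"}

print(fluency_counts(system_out, parallel_vocab, mono_vocab, reference_vocab))
# -> (3, 2): three novel multi-unit words, of which two are attested;
#    the misspelled "Asbestatten" is novel but not attested anywhere.
```

The manual naturalness annotation then only needs to sample from the unattested remainder, which keeps the amount of human judgement required small.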
In the results (Table 9), we see that the systems trained with additional monolingual or synthetic data have a higher proportion of novel words attested in the non-parallel data, and a higher proportion that is deemed natural by our annotator. This supports our expectation that additional monolingual data improves the (word-level) fluency of the NMT system. 12Subword boundaries are marked with ‘|’. 13For the annotation, the words were blinded regarding the system that produced them. 93 5 Related Work To our knowledge, the integration of monolingual data for pure neural machine translation architectures was first investigated by (Gülçehre et al., 2015), who train monolingual language models independently, and then integrate them during decoding through rescoring of the beam (shallow fusion), or by adding the recurrent hidden state of the language model to the decoder state of the encoder-decoder network, with an additional controller mechanism that controls the magnitude of the LM signal (deep fusion). In deep fusion, the controller parameters and output parameters are tuned on further parallel training data, but the language model parameters are fixed during the finetuning stage. Jean et al. (2015b) also report on experiments with reranking of NMT output with a 5-gram language model, but improvements are small (between 0.1–0.5 BLEU). The production of synthetic parallel texts bears resemblance to data augmentation techniques used in computer vision, where datasets are often augmented with rotated, scaled, or otherwise distorted variants of the (limited) training set (Rowley et al., 1996). Another similar avenue of research is selftraining (McClosky et al., 2006; Schwenk, 2008). The main difference is that self-training typically refers to scenario where the training set is enhanced with training instances with artificially produced output labels, whereas we start with human-produced output (i.e. the translation), and artificially produce an input. We expect that this is more robust towards noise in the automatic translation. Improving NMT with monolingual source data, following similar work on phrasebased SMT (Schwenk, 2008), remains possible future work. Domain adaptation of neural networks via continued training has been shown to be effective for neural language models by (Ter-Sarkisov et al., 2015), and in work parallel to ours, for neural translation models (Luong and Manning, 2015). We are the first to show that we can effectively adapt neural translation models with monolingual data. 6 Conclusion In this paper, we propose two simple methods to use monolingual training data during training of NMT systems, with no changes to the network architecture. Providing training examples with dummy source context was successful to some extent, but we achieve substantial gains in all tasks, and new SOTA results, via back-translation of monolingual target data into the source language, and treating this synthetic data as additional training data. We also show that small amounts of indomain monolingual data, back-translated into the source language, can be effectively used for domain adaptation. In our analysis, we identified domain adaptation effects, a reduction of overfitting, and improved fluency as reasons for the effectiveness of using monolingual data for training. While our experiments did make use of monolingual training data, we only used a small random sample of the available data, especially for the experiments with synthetic parallel data. 
It is conceivable that larger synthetic data sets, or data sets obtained via data selection, will provide bigger performance benefits. Because we do not change the neural network architecture to integrate monolingual training data, our approach can be easily applied to other NMT systems. We expect that the effectiveness of our approach not only varies with the quality of the MT system used for back-translation, but also depends on the amount (and similarity to the test set) of available parallel and monolingual data, and the extent of overfitting of the baseline model. Future work will explore the effectiveness of our approach in more settings. Acknowledgments The research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland. This project received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement 645452 (QT21). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations (ICLR). Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In Proceedings of the Fourth Workshop on Statistical Machine Translation 94 StatMT 09. Association for Computational Linguistics. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 Workshop on Statistical Machine Translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal. Association for Computational Linguistics. P.F. Brown, S.A. Della Pietra, V.J. Della Pietra, F. Jelinek, J.D. Lafferty, R.L. Mercer, and P.S. Roossin. 1990. A Statistical Approach to Machine Translation. Computational Linguistics, 16(2):79–85. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web Inventory of Transcribed and Translated Talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pages 261–268, Trento, Italy. Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT Evaluation Campaign, IWSLT 2014. In Proceedings of the 11th Workshop on Spoken Language Translation, pages 2–16, Lake Tahoe, CA, USA. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics. Alex Graves. 2011. Practical Variational Inference for Neural Networks. In J. Shawe-Taylor, R.S. Zemel, P.L. Bartlett, F. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2348–2356. Curran Associates, Inc. Çaglar Gülçehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loïc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On Using Monolingual Corpora in Neural Machine Translation. CoRR, abs/1503.03535. Barry Haddow, Matthias Huck, Alexandra Birch, Nikolay Bogoychev, and Philipp Koehn. 2015. 
The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 126–133, Lisbon, Portugal. Association for Computational Linguistics. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580. Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015a. On Using Very Large Target Vocabulary for Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10, Beijing, China. Association for Computational Linguistics. Sébastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015b. Montreal Neural Machine Translation Systems for WMT’15 . In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 134–140, Lisbon, Portugal. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Patrik Lambert, Holger Schwenk, Christophe Servan, and Sadaf Abdul-Rauf. 2011. Investigations on Translation Model Adaptation Using Monolingual Data. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 284–293, Edinburgh, Scotland. Association for Computational Linguistics. Minh-Thang Luong and Christopher D. Manning. 2015. Stanford Neural Machine Translation Systems for Spoken Language Domains. In Proceedings of the International Workshop on Spoken Language Translation 2015, Da Nang, Vietnam. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412– 1421, Lisbon, Portugal. Association for Computational Linguistics. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective Self-training for Parsing. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL ’06, pages 152–159, New York. Association for Computational Linguistics. Henry Rowley, Shumeet Baluja, and Takeo Kanade. 1996. Neural Network-Based Face Detection. In Computer Vision and Pattern Recognition ’96. 95 Ha¸sim Sak, Tunga Güngör, and Murat Saraçlar. 2007. Morphological Disambiguation of Turkish Text with Perceptron Algorithm. In CICLing 2007, pages 107–118. Holger Schwenk. 2008. Investigations on Large-Scale Lightly-Supervised Training for Statistical Machine Translation. In International Workshop on Spoken Language Translation, pages 182–189. Rico Sennrich and Barry Haddow. 2015. A Joint Dependency Model of Morphological and Syntactic Structure for Statistical Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2081–2087, Lisbon, Portugal. Association for Computational Linguistics. 
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112, Montreal, Quebec, Canada. Alex Ter-Sarkisov, Holger Schwenk, Fethi Bougares, and Loïc Barrault. 2015. Incremental Adaptation Strategies for Neural Network Language Models. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 48–56, Beijing, China. Association for Computational Linguistics. Francis M. Tyers and Murat S. Alperen. 2010. SETimes: A parallel corpus of Balkan languages. In Workshop on Exploitation of multilingual resources and tools for Central and (South) Eastern European Languages at the Language Resources and Evaluation Conference, pages 1–5. 96
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 952–962, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings Fereshte Khani Stanford University [email protected] Martin Rinard MIT [email protected] Percy Liang Stanford University [email protected] Abstract Can we train a system that, on any new input, either says “don’t know” or makes a prediction that is guaranteed to be correct? We answer the question in the affirmative provided our model family is wellspecified. Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output. We operationalize this principle for semantic parsing, the task of mapping utterances to logical forms. We develop a simple, efficient method that reasons over the infinite set of all consistent models by only checking two of the models. We prove that our method obtains 100% precision even with a modest amount of training data from a possibly adversarial distribution. Empirically, we demonstrate the effectiveness of our approach on the standard GeoQuery dataset. 1 Introduction If a user asks a system “How many painkillers should I take?”, it is better for the system to say “don’t know” rather than making a costly incorrect prediction. When the system is learned from data, uncertainty pervades, and we must manage this uncertainty properly to achieve our precision requirement. It is particularly challenging since training inputs might not be representative of test inputs due to limited data, covariate shift (Shimodaira, 2000), or adversarial filtering (Nelson et al., 2009; Mei and Zhu, 2015). In this unforgiving setting, can we still train a system that is guaranteed to either abstain or to make the correct prediction? Our present work is motivated by the goal of input output area of Iowa area(IA) cities in Ohio city(OH) cities in Iowa city(IA) mapping 1 mapping 2 mapping k output 1 area(OH) area(OH) output 2 area(OH) area(OH) output k area(OH) OH training examples input area of Ohio Ohio area output area(Ohio) don’t know testing examples unanimity ... ... Figure 1: Given a set of training examples, we compute C, the set of all mappings consistent with the training examples. On an input x, if all mappings in C unanimously predict the same output, we return that output; else we return “don’t know”. building reliable question answering systems and natural language interfaces. Our goal is to learn a semantic mapping from examples of utterancelogical form pairs (Figure 1). More generally, we assume the input x is a bag (multiset) of source atoms (e.g., words {area, of, Ohio}), and the output y is a bag of target atoms (e.g., predicates {area, OH}). We consider learning mappings M that decompose according to the multiset sum: M(x) = ⊎s∈xM(s) (e.g., M({Ohio}) = {OH}, M({area,of,Ohio}) = {area,OH}). The main challenge is that an individual training example (x, y) does not tell us which source atoms map to which target atoms.1 How can a system be 100% sure about something if it has seen only a small number of possibly non-representative examples? Our approach is based on what we call the unanimity principle (Section 2.1). Let M be a model family that contains the true mapping from inputs to outputs. 
Let C be the subset of mappings that are consistent 1A semantic parser further requires modeling the context dependence of words and the logical form structure joining the predicates. Our framework handles these cases with a different choice of source and target atoms (see Section 4.2). 952 with the training data. If all mappings M ∈C unanimously predict the same output on a test input, then we return that output; else we return “don’t know” (see Figure 1). The unanimity principle provides robustness to the particular input distribution, so that we can tolerate even adversaries (Mei and Zhu, 2015), provided the training outputs are still mostly correct. To operationalize the unanimity principle, we need to be able to efficiently reason about the predictions of all consistent mappings C. To this end, we represent a mapping as a matrix M, where Mst is number of times target atom t (e.g., OH) shows up for each occurrence of the source atom s (e.g., Ohio) in the input. We show that unanimous prediction can be performed by solving two integer linear programs. With a linear programming relaxation (Section 3), we further show that checking unanimity over C can be done very efficiently without any optimization but rather by checking the predictions of just two random mappings, while still guaranteeing 100% precision with probability 1 (Section 3.2). We further relax the linear program to a linear system, which gives us a geometric view of the unanimity: We predict on a new input if it can be expressed as a “linear combination” of the training inputs. As an example, suppose we are given training data consisting of (CI) cities in Iowa, (CO) cities in Ohio, and (AI) area of Iowa (Figure 1). We can compute (AO) area of Ohio by analogy: (AO) = (CO) - (CI) + (AI). Other reasoning patterns fall out from more complex linear combinations. We can handle noisy data (Section 3.4) by asking for unanimity over additional slack variables. We also show how the linear algebraic formulation enables other extensions such as learning from denotations (Section 5.1), active learning (Section 5.2), and paraphrasing (Section 5.3). We validate our methods in Section 4. On artificial data generated from an adversarial distribution with noise, we show that unanimous prediction obtains 100% precision, whereas point estimates fail. On GeoQuery (Zelle and Mooney, 1996), a standard semantic parsing dataset, where our model assumptions are violated, we still obtain 100% precision. We were able to reach 70% recall on recovering predicates and 59% on full logical forms. source atoms target atoms {area, of, Iowa} {area, IA} {cities, in, Ohio} {city, OH} {cities, in, Iowa} {city, IA} mapping 1 cities →{city} in →{} of →{} area →{area} Iowa →{IA} Ohio →{OH} mapping 2 cities →{} in →{city} of →{} area →{area} Iowa →{IA} Ohio →{OH} mapping 3 cities →{city} in →{} of →{area} area →{} Iowa →{IA} Ohio →{OH} mapping 4 cities →{} in →{city} of →{area} area →{} Iowa →{IA} Ohio →{OH} Figure 2: Given the training examples in the top table, there are exactly four mappings consistent with these training examples. 2 Setup We represent an input x (e.g., area of Ohio) as a bag (multiset) of source atoms and an output y (e.g., area(OH)) as a bag of target atoms. In the simplest case, source atoms are words and target atoms are predicates—see Figure 2(top) for an example.2 We assume there is a true mapping M∗from a source atom s (e.g., Ohio) to a bag of target atoms t = M∗(s) (e.g., {OH}). 
Note that M∗can also map a source atom s to no target atoms (M∗(of) = {}) or multiple target atoms (M∗(grandparent) = {parent, parent}). We extend M∗to bag of source atoms via multiset sum: M∗(x) = ⊎s∈xM∗(s). Of course, we do not know M∗and must estimate it from training data. Our training examples are input-output pairs D = {(x1, y1), . . . , (xn, yn)}. For now, we assume that there is no noise so that yi = M∗(xi); Section 3.4 shows how to deal with noise. Our goal is to output a mapping ˆ M that maps each input x to either a bag of target atoms or “don’t know.” We say that ˆ M has 100% precision if ˆ M(x) = M∗(x) whenever ˆ M(x) is not “don’t know.” The chief difficulty is that the source atoms xi and the target atoms yi are unaligned. While we could try to infer the alignment, we will show that it is unnecessary for obtaining 100% precision. 2.1 Unanimity principle Let M be the set of mappings (which contains the true mapping M∗). Let C be the subset of map2Our semantic parsing experiments (Section 4.2) use more complex source and target atoms to capture some context and structure. 953 S = area of Ohio cities in Iowa " # area of Iowa 1 1 0 0 0 1 cities in Ohio 0 0 1 1 1 0 cities in Iowa 0 0 0 1 1 1 M = area city OH IA     area 1 0 0 0 of 0 0 0 0 Ohio 0 0 1 0 cities 0 1 0 0 in 0 0 0 0 Iowa 0 0 0 1 T = area city OH IA " # area(IA) 1 0 0 1 city(OH) 0 1 1 0 city(IA) 0 1 0 1 Figure 3: Our training data encodes a system of linear equations SM = T, where the rows of S are inputs, the rows of T are the corresponding outputs, and M specifies the mapping between source and target atoms. pings consistent with the training examples. C def = {M ∈M | M(xi) = yi, ∀i = 1, . . . , n} (1) Figure 2 shows the four mappings consistent with the training set in our running example. Let F be the set of safe inputs, those on which all mappings in C agree: F def = {x : |{M(x) : M ∈C}| = 1}. (2) The unanimity principle defines a mapping ˆ M that returns the unanimous output on F and “don’t know” on its complement. This choice obtains the following strong guarantee: Proposition 1. For each safe input x ∈F, we have ˆ M(x) = M∗(x). In other words, M obtains 100% precision. Furthermore, ˆ M obtains the best possible recall given this model family subject to 100% precision, since for any x ̸∈F there are at least two possible outputs generated by consistent mappings, so we cannot safely guess one of them. 3 Linear algebraic formulation To solve the learning problem laid out in the previous section, let us recast the problem in linear algebraic terms. Let ns (nt) be the number of source (target) atom types. First, we can represent the bag x (y) as a ns-dimensional (nt-dimensional) row vector of counts; for example, the vector form of “area of Ohio” is area of Ohio cities in Iowa [ ] 1 1 1 0 0 0 . We represent the mapping M as a non-negative integer-valued matrix, where Mst is the number of times target atom t appears in the bag that source atom s maps to (Figure 3). We also encode the n training examples as matrices: S is an n × ns matrix where the i-th row is xi; T as an n×nt matrix where the i-th row is yi. Given these matrices, we can rewrite the set of consistent mappings (2) as: C = {M ∈Zns×nt ≥0 : SM = T}. (3) See Figure 3 for the matrix formulation of S and T, along with one possible consistent mapping M for our running example. 
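To make the matrix formulation concrete, the following sketch (our illustration in NumPy, not code from the paper) encodes the running example as S and T and applies the unanimity principle by brute force. It exploits the fact that the constraint SM = T, together with non-negativity and integrality, decomposes over the columns of M, so consistent mappings can be enumerated one target atom at a time; the cap of two copies of a target atom per source atom is an arbitrary choice for this toy example, and the enumeration is feasible only because the vocabulary is tiny.

import itertools
import numpy as np

src = ["area", "of", "Ohio", "cities", "in", "Iowa"]   # source atoms
tgt = ["area", "city", "OH", "IA"]                     # target atoms

def bag(atoms, vocab):
    # count vector (bag of atoms) over a fixed vocabulary
    v = np.zeros(len(vocab), dtype=int)
    for a in atoms:
        v[vocab.index(a)] += 1
    return v

# Training matrices: rows of S are inputs, rows of T the corresponding outputs.
S = np.array([bag("area of Iowa".split(), src),
              bag("cities in Ohio".split(), src),
              bag("cities in Iowa".split(), src)])
T = np.array([bag(["area", "IA"], tgt),
              bag(["city", "OH"], tgt),
              bag(["city", "IA"], tgt)])

def consistent_columns(t_col, max_count=2):
    # all non-negative integer columns m with S m = t_col (entries capped at max_count)
    return [np.array(m)
            for m in itertools.product(range(max_count + 1), repeat=S.shape[1])
            if np.array_equal(S @ np.array(m), t_col)]

# S M = T constrains each column of M independently, one column per target atom.
columns = [consistent_columns(T[:, j]) for j in range(len(tgt))]

def predict(utterance):
    # return the unanimous output bag, or None for "don't know"
    x = bag(utterance.split(), src)
    output = {}
    for j, candidates in enumerate(columns):
        counts = {int(x @ m) for m in candidates}
        if len(counts) != 1:          # consistent mappings disagree on target atom j
            return None
        c = counts.pop()
        if c > 0:
            output[tgt[j]] = c
    return output

print(predict("area of Ohio"))   # {'area': 1, 'OH': 1}: all four consistent mappings (Figure 2) agree
print(predict("in of Ohio"))     # None, i.e. "don't know"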
3.1 Integer linear programming Finding an element of C as defined in (3) corresponds to solving an integer linear program (ILP), which is NP-hard in the worst case, though there exist relatively effective off-the-shelf solvers such as Gurobi. However, one solution is not enough. To check whether an input x is in the safe set F (2), we need to check whether all mappings M ∈C predict the same output on x; that is, xM is the same for all M ∈C. Our insight is that we can check whether x ∈F by solving just two ILPs. Recall that we want to know if the output vector xM can be different for different M ∈C. To do this, we pick a random vector v ∈Rnt, and consider the scalar projection xMv. The first ILP maximizes this scalar and the second one minimizes it. If both ILPs return the same value, then with probability 1, we can conclude that xM is the same for all mappings M ∈C and thus x ∈F. The following proposition formalizes this: Proposition 2. Let x be any input. Let v ∼ N(0, Int×nt) be a random vector. Let a = minM∈C xMv and b = maxM∈C xMv. With probability 1, a = b iff x ∈F. Proof. If x ∈F, there is only one output xM, so a = b. If x ̸∈F, there exists two M1, M2 ∈C for which xM1 ̸= xM2. Then w def = x(M1 − 954 (6,0,0) (0,6,0) (0,0,0) p1 p2 R P a 2-dimensional ball z ≤0 −z ≤0 −x ≤0 −y ≤0 x + y ≤6 Figure 4: Our goal is to find two points p1, p2 in the relative interior of a polytope P defined by inequalities shown on the right. The inequalities z ≤ 0 and −z ≤0 are always active. Therefore, P is a 2-dimensional polytope. One solution to the LP (6) is α∗= 1, p∗= (1, 1, 0), ξ∗⊤= [0, 0, 1, 1, 1], which results in p1 = (1, 1, 0) with R = 1/ √ 2. The other point p2 is chosen randomly from the ball of radius R. M2) ∈R1×nt is nonzero. The probability of wv = 0 is zero because the space orthogonal to w is a (nt−1)-dimensional space while v is drawn from a nt-dimensional space. Therefore, with probability 1, xM1v ̸= xM2v. Without loss of generality, a ≤xM1v < xM2v ≤b, so a ̸= b. 3.2 Linear programming Proposition 2 requires solving two non-trivial ILPs per input at test time. A natural step is to relax the integer constraint so that we solve two LPs instead. CLP def = {M ∈Rns×nt ≥0 | SM = T} (4) FLP def = {x : |{M(x) : M ∈CLP}| = 1}. (5) The set of consistent mappings is larger (CLP ⊇ C), so the set of safe inputs is smaller (FLP ⊆ F). Therefore, if we predict only on FLP, we still maintain 100% precision, although the recall could be lower. Now we will show how to exploit the convexity of CLP (unlike C) to avoid solving any LPs at test time at all. The basic idea is that if we choose two mappings M1, M2 ∈CLP “randomly enough”, whether xM1 = xM2 is equivalent to unanimity over CLP. We could try to sample M1, M2 uniformly from CLP, but this is costly. We instead show that “less random” choice suffices. This is formalized as follows: Proposition 3. Let X be a finite set of test inputs. Let d be the dimension of CLP. Let M1 be any mapping in CLP, and let vec(M2) be sampled from a proper density over a d-dimensional ball lying in CLP centered at vec(M1). Then, with probability 1, for all x ∈X, xM1 = xM2 implies x ∈FLP. Proof. We will prove the contrapositive. If x ̸∈ FLP, then xM is not the same for all M ∈ CLP. Without loss of generality, assume not all M ∈CLP agree on the i-th component of xM. Note that (xM)i = tr(Meix), which is the inner product of vec(M) and vec(eix). Since (xM)i is not the same for all M ∈CLP and CLP is convex, the projection of CLP onto vec(eix) must be a one-dimensional polytope. 
For both vec(M1) and vec(M2) to have the same projection on vec(eix), they would have to both lie in a (d −1)-dimensional polytope orthogonal to vec(eix). Since vec(M2) is sampled from a proper density over a d-dimensional ball, this has probability 0. Algorithm. We now provide an algorithm to find two points p1, p2 inside a general ddimensional polytope P = {p : Ap ≤b} satisfying the conditions of Proposition 3, where for clarity we have simplified the notation from vec(Mi) to pi and CLP to P. We first find a point p1 in the relative interior of P, which consists of points for which the fewest number of inequalities j are active (i.e., ajp = bj). We can achieve this by solving the following LP from Freund et al. (1985): max 1⊤ξ s.t. Ap + ξ ≤αb, 0 ≤ξ ≤1, α ≥1. (6) Here, ξj is a lower bound on the slack of inequality j, and α scales up the polytope so that all the ξj that can be positive are exactly 1 in the optimum solution. Importantly, if ξj = 0, constraint j is always active for all solutions p ∈P. Let (p∗, ξ∗, α∗) be an optimal solution to the LP. Then define A1 as the submatrix of A containing rows j for which ξ∗ j = 1, and A0 consist of the remaining rows for which ξ∗ j = 0. The above LP gives us p1 = p∗/α∗, which lies in the relative interior of P (see Figure 4). To obtain p2, define a radius R def = (α maxj:ξ∗ j =1 ∥aj∥2)−1. Let the columns of matrix N form an orthonormal basis of the null space of A0. Sample v from a unit d-dimensional ball centered at 0, and set p2 = p1 + RNv. To show that p2 ∈P: First, p2 satisfies the always-active constraints j, a⊤ j (p1 + RNv) = bj, 955 Algorithm 1 Our linear programming approach. procedure TRAIN Input: Training examples Output: Generic mappings (M1, M2) Define CLP as explained in (4). Compute M1 and a radius R by solving an LP (6). Sample M2 from a ball with radius R around M1. return (M1, M2) end procedure procedure TEST Input: input x, mappings (M1, M2) Output: A guaranteed correct y or “don’t know” Compute y1 = xM1 and y2 = xM2. if y1 = y2 then return y1 else return “don’t know” end if end procedure by definition of null space. For non-active j, the LP ensures that a⊤ j p1 + α−1 ≤bj, which implies a⊤ j (p1 + RNv) ≤bj. Algorithm 1 summarizes our overall procedure: At training time, we solve a single LP (6) and draw a random vector to obtain M1, M2 satisfying Proposition 3. At test time, we simply apply M1 and M2, which scales only linearly with the number of source atoms in the input. 3.3 Linear system To obtain additional intuition about the unanimity principle, let us relax CLP (4) further by removing the non-negativity constraint, which results in a linear system. Define the relaxed set of consistent mappings to be all the solutions to the linear system and the relaxed safe set accordingly: CLS def = {M ∈Rns×nt | SM = T} (7) FLS def = {x : |{M(x) : M ∈CLS}| = 1}. (8) Note that CLS is an affine subspace, so each M ∈CLS can be expressed as M0 + BA, where M0 is an arbitrary solution, B is a basis for the null space of S and A is an arbitrary matrix. Figure 5 presents the linear system for four training examples. In the rare case that S has full column rank (if we have many training examples), then the left inverse of S exists, and there is exactly one consistent mapping, the true one (M∗= S†T), but we do not require this. Let’s try to explore the linear algebraic structure in the problem. 
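As a concrete (toy, NumPy-based) illustration of this structure, one particular solution M0 and a null-space basis B of S can be computed directly, and unanimity under the linear-system relaxation can then be tested in the spirit of Algorithm 1 by comparing the predictions of two randomly perturbed consistent mappings M0 + BA; the specific numerical choices below are ours, not the authors'.

import numpy as np

# Source-atom order: area, of, Ohio, cities, in, Iowa; target order: area, city, OH, IA.
S = np.array([[1., 1., 0., 0., 0., 1.],   # area of Iowa
              [0., 0., 1., 1., 1., 0.],   # cities in Ohio
              [0., 0., 0., 1., 1., 1.]])  # cities in Iowa
T = np.array([[1., 0., 0., 1.],           # area(IA)
              [0., 1., 1., 0.],           # city(OH)
              [0., 1., 0., 1.]])          # city(IA)

M0 = np.linalg.pinv(S) @ T                # one particular real-valued solution of S M = T
_, sv, Vt = np.linalg.svd(S)
B = Vt[(sv > 1e-10).sum():].T             # orthonormal basis of null(S); all solutions are M0 + B A

def predict_ls(x, seed=0):
    # compare the predictions of two randomly perturbed consistent mappings M0 + B A
    rng = np.random.default_rng(seed)
    y1, y2 = [x @ (M0 + B @ rng.normal(size=(B.shape[1], T.shape[1]))) for _ in range(2)]
    return np.round(y1).astype(int) if np.allclose(y1, y2) else None

print(predict_ls(np.array([1., 1., 1., 0., 0., 0.])))  # "area of Ohio" -> [1 0 1 0], i.e. area(OH)
print(predict_ls(np.array([0., 1., 1., 0., 1., 0.])))  # "in of Ohio"   -> None ("don't know")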
Intuitively, if we know area of Ohio maps to area(OH) and Ohio maps to OH, then we should conclude area of maps to area by subtracting the second example from the first. The following proposition formalizes and generalizes this intuition by characterizing the relaxed safe set: Proposition 4. The vector x is in row space of S iff x ∈FLS. S z }| { area of Ohio cities in Iowa " # area of Iowa 1 1 0 0 0 1 +1 cities in Ohio 0 0 1 1 1 0 +1 cities in Iowa 0 0 0 1 1 1 −1 [ ] area of Ohio 1 1 1 0 0 0 T z }| { area city OH IA " # area(IA) 1 0 0 1 +1 city(OH) 0 1 1 0 +1 city(IA) 0 1 0 1 −1 [ ] area(OH) 1 0 1 0 Figure 6: Under the linear system relaxation, we can predict the target atoms for the new input area of Ohio by adding and subtracting training examples (rows of S and T). Proof. If x is in the row space of S, we can write x as a linear combination of S for some coefficients α ∈Rn: x = α⊤S. Then for all M ∈CLS, we have SM = T, so xM = α⊤SM = α⊤T, which is the unique output3 (See Figure 6). If x ∈FLS is safe, then there exists a y such that for all M ∈ CLS, xM = y. Recall that each element of CLS can be decomposed into M0 + BA. For x(M0 + BA) to be the same for each A, x should be orthogonal to each column of B, a basis for the null space of S. This means that x is in the row space of S. Intuitively, this proposition says that stitching new inputs together by adding and subtracting existing training examples (rows of S) gives you exactly the relaxed safe set FLS. Note that relaxations increases the set of consistent mappings (CLS ⊇CLP ⊇C), which has the contravariant effect of shrinking the safe set (FLS ⊆FLP ⊆F). Therefore, using the relaxation (predicting when x ∈FLS) still preserves 100% precision. 3.4 Handling noise So far, we have assumed that our training examples are noiseless, so that we can directly add the 3There might be more than one set of coefficients (α1, α2) for writing x. However, they result to a same output: α⊤ 1 S = α⊤ 2 S =⇒α⊤ 1 SM = α⊤ 2 SM =⇒α⊤ 1 T = α⊤ 2 T. 956 S z }| { area of Ohio cities in Iowa     1 1 0 0 0 1 0 0 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 0 ×M = T z }| { area city OH IA     1 0 0 1 0 1 1 0 0 1 0 1 1 1 1 0 =⇒ M = M0 z }| { area city OH IA     area 1 0 0 0 of 0 0 0 0 Ohio 0 0 1 0 cities 0 1 0 0 in 0 0 0 0 Iowa 0 0 0 1 + B z }| {   −1 0 1 0 0 0 0 −1 0 1 0 0   × A z }| { a1,1 a1,2 a1,3 a1,4 a2,1 a2,2 a2,3 a2,4  Figure 5: Under the linear system relaxation, all solutions M to SM = T can be expressed as M = M0 + BA, where B is the basis for the null space of S and A is arbitrary. Rows s of B which are zero (Ohio and Iowa) correspond to the safe source atoms (though not the only safe inputs). constraint SM = T. Now assume that an adversary has made at most nmistakes additions to and deletions of target atoms across the examples in T, but of course we do not know which examples have been tainted. Can we still guarantee 100% precision? The answer is yes for the ILP formulation: we simply replace the exact match condition (SM = T) with a weaker one: ∥SM −T∥1 ≤nmistakes (*). The result is still an ILP, so the techniques from Section 3.1 readily apply. Note that as nmistakes increases, the set of candidate mappings grows, which means that the safe set shrinks. Unfortunately, this procedure is degenerate for linear programs. If the constraint (*) is not tight, then M+E also satisfies the constraint for any matrix E of small enough norm. 
This means that the consistent mappings CLP will be full-dimensional and certainly not be unanimous on any input. Another strategy is to remove examples from the dataset if they could be potentially noisy. For each training example i, we run the ILP (*) on all but the i-th example. If the i-th example is not in the resulting safe set (2), we remove it. This procedure produces a noiseless dataset, on which we can apply the noiseless linear program or linear system from the previous sections. 4 Experiments 4.1 Artificial data We generated a true mapping M∗from 50 source atoms to 20 target atoms so that each source atom maps to 0–2 target atoms. We then created 120 training examples and 50 test examples, where the length of every input is between 5 and 10. The source atoms are divided into 10 clusters, and each input only contains source atoms from one cluster. Figure 7a shows the results for F (integer linear programming), FLP (linear programming), and FLS (linear system). All methods attain 100% precision, and as expected, relaxations lead to lower recall, though they all can reach 100% recall given enough data. Comparison with point estimation. Recall that the unanimity principle ˆ M reasons over the entire set of consistent mappings, which allows us to be robust to changes in the input distribution, e.g., from training set attacks (Mei and Zhu, 2015). As an alternative, consider computing the point estimate Mp that minimizes ∥SM −T∥2 2 (the solution is given by Mp = S†T). The point estimate, by minimizing the average loss, implicitly assumes i.i.d. examples. To generate output for input x we compute y = xMp and round each coordinate yt to the closest integer. To obtain a precision-recall tradeoff, we set a threshold ϵ and if for all target atoms t, the interval [yt −ϵ, yt + ϵ) contains an integer, we set yt to that integer; otherwise we report “don’t know” for input x. To compare unanimous prediction ˆ M and point estimation Mp, for each f ∈{0.2, 0.5, 0.7}, we randomly generate 100 subsampled datasets consisting of an f fraction of the training examples. For Mp, we sweep ϵ across {0.0, 0.1, . . . , 0.5} to obtain a ROC curve. In Figure 7c(left/right), we select the distribution that results in the maximum/minimum difference between F1( ˆ M) and F1(Mp) respectively. As shown, ˆ M has always 100% precision, while Mp can obtain less 100% precision over its full ROC curve. An adversary can only hurt the recall of unanimous prediction. Noise. As stated in Section 3.4, our algorithm has the ability to guarantee 100% precision even when the adversary can modify the outputs. As we increase the number of predicate additions/deletions (nmistakes), Figure 7b shows that precision remains at 100%, while recall naturally decreases in response to being less confident about 957 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Fraction of training data Recall precision (all) recall (ILP) recall (LP) recall (LS) (a) All the relaxations reach 100% recall; relaxation results in slightly slower convergence. 0 30 60 90 120 150 0 0.2 0.4 0.6 0.8 1 nmistakes precision (ILP) recall (ILP) (b) Size of the safe set shrinks with increasing number of mistakes in the training data. 
0 0.2 0.4 0.6 0.8 1 0.4 0.6 0.8 1 Recall Precision Mp(0.2) ˆ M (0.2) Mp(0.5) ˆ M (0.5) Mp(0.7) ˆ M (0.7) 0 0.2 0.4 0.6 0.8 1 0.4 0.6 0.8 1 Recall Precision Mp(0.2) ˆ M (0.2) Mp(0.5) ˆ M (0.5) Mp(0.7) ˆ M (0.7) (c) Performance of the point estimate (Mp) and unanimous prediction ( ˆ M) when the inputs are chosen adversarially for Mp (left) and for ˆ M (right). Figure 7: Our algorithm always obtains 100% precision with (a) different amounts of training examples and different relaxations, (b) existence of noise, and (c) adversarial input distributions. 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Percentage of data Recall precision (LS) recall (LS) Figure 8: We maintain 100% precision while recall increases with the number of training examples. the training outputs. 4.2 Semantic parsing on GeoQuery We now evaluate our approach on the standard GeoQuery dataset (Zelle and Mooney, 1996), which contains 880 utterances and their corresponding logical forms. The utterances are questions related to the US geography, such as: “what river runs through the most states”. We use the standard 600/280 train/test split (Zettlemoyer and Collins, 2005). After replacing entity names by their types4 based on the standard entity lexicon, there are 172 different words and 57 different predicates in this dataset. Handling context. Some words are polysemous in that they map to two predicates: in “largest river” and “largest city”, the word largest maps to longest and biggest, respectively. Therefore, instead of using words as source atoms, we 4If an entity name has more than one type we replace it by concatenating all of its possible types. use bigrams, so that each source atom always maps to the same target atoms. Reconstructing the logical form. We define target atoms to include more information than just the predicates, which enables us to reconstruct logical forms from the predicates. We use the variable-free functional logical forms (Kate et al., 2005), in which each target atom is a predicate conjoined with its argument order (e.g., loc 1 or loc 2). Table 1 shows two different choices of target atoms. At test time, we search over all possible “compatible” ways of combining target atoms into logical forms. If there is exactly one, then we return that logical form and abstain otherwise. We call a predicate combination “compatible” if it appears in the training set. We put a “null” word at the end of each sentence, and collapsed the loc and traverse predicates. To deal with noise, we minimized ∥SM −T∥1 over real-valued mappings and removed any example (row) with non-zero residual. We perform all experiments using the linear system relaxation. Training takes under 30 seconds. Figure 8 shows precision and recall as a function of the number of the training examples. We obtain 70% recall over predicates on the test examples. 84% of these have a unique compatible way of combining target atoms into a logical form, which results in a 59% recall on logical forms. Though our modeling assumptions are incorrect for real data, we were still able to get 100% precision for all training examples. Interestingly, the linear system (which allows negative mappings) helps model GeoQuery dataset better than the linear program (which has a non-negativity constraint). 
There exists a predicate all:e in GeoQuery that is in every sentence unless the ut958 utterances logical form (A) target atoms (A) logical form (B) target atoms (B) cities traversed by the Columbia city(x),loc(x,Columbia) city,loc,Columbia city(loc 1(Columbia)) city,loc 1,Columbia cities of Texas city(x),loc(Texas,x) city,loc,Texas city(loc 2(Texas)) city,loc 2,Texas Table 1: Two different choices of target atoms: (A) shows predicates and (B) shows predicates conjoined with their argument position. (A) is sufficient for simply recovering the predicates, whereas (B) allows for logical form reconstruction. terance contains a proper noun. With negative mappings, null maps to all:e, while each proper noun maps to its proper predicate minus all:e. There is a lot of work in semantic parsing that tackles the GeoQuery dataset (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Kwiatkowski et al., 2010; Liang et al., 2011), and the state-of-the-art is 91.1% precision and recall (Liang et al., 2011). However, none of these methods can guarantee 100% precision, and they perform more feature engineering, so these numbers are not quite comparable. In practice, one could use our unanimous prediction approach in conjunction with others: For example, one could run a classic semantic parser and simply certify 59% of the examples to be correct with our approach. In critical applications, one could use our approach as a first-pass filter, and fall back to humans for the abstentions. 5 Extensions 5.1 Learning from denotations Up until now, we have assumed that we have input-output pairs. For semantic parsing, this means annotating sentences with logical forms (e.g., area of Ohio to area(OH)) which is very expensive. This has motivated previous work to learn from question-answer pairs (e.g., area of Ohio to 44825) (Liang et al., 2011). This provides weaker supervision: For example, 44825 is the area of Ohio (in squared miles), but it is also the zip code of Chatfield. So, the true output could be either area(OH) or zipcode(Chatfield). In this section, we show how to handle this form of weak supervision by asking for unanimity over additional selection variables. Formally, we have D = {(x1, Y1), . . . , (xn, Yn)} as a set of training examples, here each Yi consists of ki candidate outputs for xi. In this case, the unknowns are the mapping M as before along with a selection vector πi, which specifies which of the ki outputs in Yi is equal to xiM. To implement the unanimity prin0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Percentage of data Recall Active learning Passive learning Figure 9: When we choose examples to be linearly independent, we only need half the number of examples to achieve the same performance. ciple, we need to consider the set of all consistent solutions (M, π). We construct an integer linear program as follows: Each training example adds a constraint that the output of it should be exactly one of its candidate output. For the i-th example, we form a matrix Ti ∈Rki×nt with all the ki candidate outputs. Formally we want xiM = πiTi. The entire ILP is: ∀i, xiM = πiTi ∀i, P j πij = 1 π, M ≥0 Given a new input x, we return the same output if xM is same for all consistent solutions (M, π). Note that we can effectively “marginalize out” π. We can also relax this ILP into an linear program following Section 3.2. 5.2 Active learning A side benefit of the linear system relaxation (Section 3.3) is that it suggests an active learning procedure. 
The setting is that we are given a set of inputs (the matrix S), and we want to (adaptively) choose which inputs (rows of S) to obtain the output (corresponding row of T) for. Proposition 4 states that under the linear system formulation, the set of safe inputs FLS is exactly the same as the row space of S. Therefore, if we ask for an input that is already in the row space of S, this will not affect FLS at all. The algo959 rithm is then simple: go through our training inputs x1, . . . , xn one by one and ask for the output only if it is not in the row space of the previouslyadded inputs x1, . . . , xi−1. Figure 9 shows the recall when we choose examples to be linearly independent in this way in comparison to when we choose examples randomly. The active learning scheme requires half as many labeled examples as the passive scheme to reach the same recall. In general, it takes rank(S) ≤n examples to obtain the same recall as having labeled all n examples. Of course, the precision of both systems is 100%. 5.3 Paraphrasing Another side benefit of the linear system relaxation (Section 3.3) is that we can easily partition the safe set FLS (8) into subsets of utterances which are paraphrases of each other. Two utterances are paraphrase of each other if both map to the same logical form, e.g., “Texas’s capital” and “capital of Texas”. Given a sentence x ∈FLS, our goal is to find all of its paraphrases in FLS. As explained in Section 3.3, we can represent each input x as a linear combination of S for some coefficients α ∈Rn: x = α⊤S. We want to find all x′ ∈FLS such that x′ is guaranteed to map to the same output as x. We can represent x′ = β⊤S for some coefficients β ∈Rn. The outputs for x and x′ are thus α⊤T and β⊤T, respectively. Thus we are interested in β’s such that α⊤T = β⊤T, or in other words, α −β is in the null space of T ⊤. Let B be a basis for the null space of T ⊤. We can then write α −β = Bv for some v. Therefore, the set of paraphrases of x ∈FLS are: Paraphrases(x) def = {(α −Bv)⊤S : v ∈Rn}. (9) 6 Discussion and related work Our work is motivated by the semantic parsing task (though it can be applied to any set-to-set prediction task). Over the last decade, there has been much work on semantic parsing, mostly focusing on learning from weaker supervision (Liang et al., 2011; Goldwasser et al., 2011; Artzi and Zettlemoyer, 2011; Artzi and Zettlemoyer, 2013), scaling up beyond small databases (Cai and Yates, 2013; Berant et al., 2013; Pasupat and Liang, 2015), and applying semantic parsing to other tasks (Matuszek et al., 2012; Kushman and Barzilay, 2013; Artzi and Zettlemoyer, 2013). However, only Popescu et al. (2003) focuses on precision. They also obtain 100% precision, but with a hand-crafted system, whereas we learn a semantic mapping. The idea of computing consistent hypotheses appears in the classic theory of version spaces for binary classification (Mitchell, 1977) and has been extended to more structured settings (Vanlehn and Ball, 1987; Lau et al., 2000). Our version space is used in the context of the unanimity principle, and we explore a novel linear algebraic structure. Our “safe set” of inputs appears in the literature as the complement of the disagreement region (Hanneke, 2007). They use this notion for active learning, whereas we use it to support unanimous prediction. There is classic work on learning classifiers that can abstain (Chow, 1970; Tortorella, 2000; Balsubramani, 2016). 
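Both of these side benefits reduce to elementary linear algebra. As one concrete example, the active-learning selection rule of Section 5.2 can be sketched as follows (a NumPy toy; the rank-based row-space test is our implementation choice, and not necessarily the most numerically careful one).

import numpy as np

def select_for_labeling(X):
    # request a label only for inputs not already in the row space of the labeled ones
    selected, labeled = [], []
    for i, x in enumerate(X):
        candidate = np.vstack(labeled + [x])
        if np.linalg.matrix_rank(candidate) > len(selected):
            # x adds a new direction to the row space, so its output is needed
            selected.append(i)
            labeled.append(x)
    return selected

# Same atom order as before; "area of Ohio" = row 0 + row 1 - row 2, so it is never queried.
X = np.array([[1., 1., 0., 0., 0., 1.],   # area of Iowa
              [0., 0., 1., 1., 1., 0.],   # cities in Ohio
              [0., 0., 0., 1., 1., 1.],   # cities in Iowa
              [1., 1., 1., 0., 0., 0.]])  # area of Ohio
print(select_for_labeling(X))             # [0, 1, 2]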
This work, however, focuses on the classification setting, whereas we considered more structured output settings (e.g., for semantic parsing). Another difference is that we operate in a more adversarial setting by leaning on the unanimity principle. Another avenue for providing user confidence is probabilistic calibration (Platt, 1999), which has been explored more recently for structured prediction (Kuleshov and Liang, 2015). However, these methods do not guarantee precision for any training set and test input. In summary, we have presented the unanimity principle for guaranteeing 100% precision. For the task of learning semantic mappings, we leveraged the linear algebraic structure in our problem to make unanimous prediction efficient. We view our work as a first step in learning reliable semantic parsers. A natural next step is to explore our framework with additional modeling improvements—especially in dealing with context, structure, and noise. Reproducibility. All code, data, and experiments for this paper are available on the CodaLab platform at https: //worksheets.codalab.org/worksheets/ 0x593676a278fc4e5abe2d8bac1e3df486/. Acknowledgments. We would like to thank the anonymous reviewers for their helpful comments. We are also grateful for a Future Of Life Research Award and NSF grant CCF-1138967, which supported this work. 960 References Y. Artzi and L. Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Empirical Methods in Natural Language Processing (EMNLP), pages 421–432. Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL), 1:49–62. A. Balsubramani. 2016. Learning to abstain from binary prediction. arXiv preprint arXiv:1602.08151. J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). Q. Cai and A. Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL). C. K. Chow. 1970. On optimum recognition error and reject tradeoff. IEEE Transactions on Information Theory, 16(1):41–46. R. M. Freund, R. Roundy, and M. J. Todd. 1985. Identifying the set of always-active constraints in a system of linear inequalities by a single linear program. Technical report, Massachusetts Institute of Technology, Alfred P. Sloan School of Management. D. Goldwasser, R. Reichart, J. Clarke, and D. Roth. 2011. Confidence driven unsupervised semantic parsing. In Association for Computational Linguistics (ACL), pages 1486–1495. S. Hanneke. 2007. A bound on the label complexity of agnostic active learning. In International Conference on Machine Learning (ICML), pages 353–360. R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1062–1068. V. Kuleshov and P. Liang. 2015. Calibrated structured prediction. In Advances in Neural Information Processing Systems (NIPS). N. Kushman and R. Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL), pages 826–836. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. 
In Empirical Methods in Natural Language Processing (EMNLP), pages 1223–1233. T. A. Lau, P. Domingos, and D. S. Weld. 2000. Version space algebra and its application to programming by demonstration. In International Conference on Machine Learning (ICML), pages 527–534. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590–599. C. Matuszek, N. FitzGerald, L. Zettlemoyer, L. Bo, and D. Fox. 2012. A joint model of language and perception for grounded attribute learning. In International Conference on Machine Learning (ICML), pages 1671–1678. S. Mei and X. Zhu. 2015. Using machine teaching to identify optimal training-set attacks on machine learners. In Association for the Advancement of Artificial Intelligence (AAAI). T. M. Mitchell. 1977. Version spaces: A candidate elimination approach to rule learning. In International Joint Conference on Artificial Intelligence (IJCAI), pages 305–310. B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. Rubinstein, U. Saini, C. Sutton, J. Tygar, and K. Xia. 2009. Misleading learners: Co-opting your spam filter. In Machine learning in cyber trust, pages 17– 51. P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL). J. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74. A. Popescu, O. Etzioni, and H. Kautz. 2003. Towards a theory of natural language interfaces to databases. In International Conference on Intelligent User Interfaces (IUI), pages 149–157. H. Shimodaira. 2000. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90:227–244. F. Tortorella. 2000. An optimal reject rule for binary classifiers. In Advances in Pattern Recognition, pages 611–620. K. Vanlehn and W. Ball. 1987. A version space approach to learning context-free grammars. Machine learning, 2(1):39–74. Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960–967. M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050–1055. 961 L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658– 666. 962
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 963–973, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Exploring Convolutional and Recurrent Neural Networks in Sequential Labelling for Dialogue Topic Tracking Seokhwan Kim, Rafael E. Banchs, Haizhou Li Human Language Technology Department Institute for Infocomm Research Singapore 138632 {kims,rembanchs,hli}@i2r.a-star.edu.sg Abstract Dialogue topic tracking is a sequential labelling problem of recognizing the topic state at each time step in given dialogue sequences. This paper presents various artificial neural network models for dialogue topic tracking, including convolutional neural networks to account for semantics at each individual utterance, and recurrent neural networks to account for conversational contexts along multiple turns in the dialogue history. The experimental results demonstrate that our proposed models can significantly improve the tracking performances in human-human conversations. 1 Introduction A human conversation often involves a series of multiple topics contextually related to each other. In this scenario, every participant in the conversation is required to understand the on-going topic discussed at each moment, detect any topic shift made by others, and make a decision to selfinitiate a new topic. These human capabilities for handling topics are also expected from dialogue systems to achieve natural and human-like conversations. Many studies have been conducted on multidomain or multi-task dialogue systems by means of sentence-level topic identification as a subtask of natural language understanding (Lin et al., 1999; Nakata et al., 2002; Lagus and Kuusisto, 2002; Adams and Martell, 2008; Ikeda et al., 2008; Celikyilmaz et al., 2011). In these approaches, a given user input at each turn is categorized into topic classes, each of which triggers the corresponding sub-system specializing in the particular topic. Despite many previous efforts, the sentence categorization methods still have the following limitations. Firstly, the effectiveness of the approaches is limited only in user-initiative conversations, because the categorization is performed mainly based on the user’s input mentioned at a given turn. Secondly, no correlation between different topics is considered neither in the topic decision process nor in each topic-specific sub-system operated independently from the others. Lastly, the conversational coherence in a given dialogue history sequence has limited effects on determining the current topic. Another direction for multi-topic dialogue systems has been towards utilizing human knowledge represented in domain models (Roy and Subramaniam, 2006) and agendas (Bohus and Rudnicky, 2003; Lee et al., 2008). The knowledge-based approaches make the system capable of having more control of dialogue flows including topic sequences. This aspect contributes to better decisions of topics in system-initiative cases, but it can adversely affect the flexibility to deal with unexpected inputs against the system’s suggestions. Moreover, the high cost of building the required resources is another problem that these methods face from a practical point of view. Recently, some researchers (Morchid et al., 2014a; Morchid et al., 2014b; Esteve et al., 2015) have worked on topic identification for analyzing human-human dialogues. 
Although they don’t aim at building components in dialogue systems directly, the human behaviours learned from the conversations can suggest directions for further advancement of conversational agents. However, the problem defined in the studies is under the assumption that every dialogue session is assigned with just a single theme category, which means any topic shift occurred in a session is left out of consideration in the analyses. On the other hand, we previously addressed the problem of detecting multiple topic transitions 963 in mixed-initiative human-human conversations, which is called dialogue topic tracking (Kim et al., 2014a; Kim et al., 2014b). In these studies, the tracking task is formulated as a classification problem for each utterance-level, similar to the sentence categorization approaches. But the target of the classification is not just an individual topic category to which each input sentence belongs, but the decision whether a topic transition occurs at a given turn as well as what the most probable topic category will follow after the transition. This paper presents our work also on dialogue topic tracking mainly focusing on the following issues. Firstly, in addition to transitions between dialogue segments from different topics, transitions between segments belonging to the same topic are also detected. This focuses the task more on detailed aspects of topic handling that are relevant to other subtasks such as natural language understanding and dialogue state tracking, rather than the conventional tracking of changes in topic categories only. Another contribution of this work is that we introduce a way to use convolutional neural networks in topic tracking to improve the classification performances with the learned convolutional features. In addition, we also propose the architectures based on recurrent neural networks to incorporate the temporal coherence that has not played an important role in previous approaches. The remainder of this paper is structured as follows. We present a problem definition of dialog topic tracking in Section 2. We describe our proposed approaches to this task using convolutional and recurrent neural networks in Section 3. We report the evaluation result of the methods in Section 4 and conclude this paper in Section 5. 2 Dialogue Topic Tracking Dialogue topic tracking is defined as a multi-class classification problem to categorize the topic state at each time step into the labels encoded in BIO tagging scheme (Ramshaw and Marcus, 1995) as follows: f(t) =            B-{c ∈C} if ut is at the beginning of a segment belongs to c, I-{c ∈C} else if ut is inside a segment belongs to c, O otherwise, where ut is the t-th utterance in a given dialogue session and C is a closed set of topic categories. t Speaker Utterance (ut) f(t) 1 Guide How can I help you? B-OPEN 2 Tourist Can you recommend some good places to visit in Singapore? B-ATTR 3 Guide Well if you like to visit an icon of Singapore, Merlion will be a nice place to visit. I-ATTR 4 Tourist Okay. But I’m particularly interested in amusement parks. B-ATTR 5 Guide Then, what about Universal Studio? I-ATTR 6 Tourist Good! How can I get there from Orchard Road by public transportation? B-TRSP 7 Guide You can take the red line train from Orchard and transfer to the purple line at Dhoby Ghaut. Then, you could reach HarbourFront where Sentosa Express departs. I-TRSP 8 Tourist How long does it take in total? I-TRSP 9 Guide It’ll take around half an hour. I-TRSP 10 Tourist Alright. 
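As a small illustration (our sketch, not part of the corpus tooling), the label set induced by this scheme and the decoding of a predicted tag sequence back into topic segments can be written as follows; the category abbreviations match those used in Figure 1 below.

CATEGORIES = ["OPEN", "ATTR", "TRSP", "FOOD"]        # a subset of C, for illustration
LABELS = ["O"] + [p + "-" + c for c in CATEGORIES for p in ("B", "I")]

def decode_segments(tags):
    # turn a BIO tag sequence (one tag per turn) into (start_turn, end_turn, topic) segments
    segments, start, topic = [], None, None
    for t, tag in enumerate(tags, start=1):
        cat = None if tag == "O" else tag[2:]
        if tag.startswith("I-") and cat == topic:
            continue                                  # the current segment goes on
        if topic is not None:
            segments.append((start, t - 1, topic))    # close the previous segment
        start, topic = (t, cat) if cat else (None, None)
    if topic is not None:
        segments.append((start, len(tags), topic))
    return segments

# Tag sequence of the example dialogue in Figure 1 (turns 1-16).
tags = (["B-OPEN", "B-ATTR", "I-ATTR", "B-ATTR", "I-ATTR", "B-TRSP"] +
        ["I-TRSP"] * 4 + ["B-TRSP", "I-TRSP", "B-FOOD"] + ["I-FOOD"] * 3)
print(decode_segments(tags))
# [(1, 1, 'OPEN'), (2, 3, 'ATTR'), (4, 5, 'ATTR'), (6, 10, 'TRSP'), (11, 12, 'TRSP'), (13, 16, 'FOOD')]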
I-TRSP 11 Guide Or, you can use the shuttle bus service from the hotels in Orchard, which is free of charge. B-TRSP 12 Tourist Great! That would be definitely better. I-TRSP 13 Guide After visiting the park, you can enjoy some seafoods at the riverside on the way back. B-FOOD 14 Tourist What food do you have any recommendations to try there? I-FOOD 15 Guide If you like spicy foods, you must try chilli crab which is one of our favourite dishes. I-FOOD 16 Tourist Great! I’ll try that. I-FOOD Figure 1: Examples of dialogue topic tracking on a tour guide dialogue labelled with BIO tags. ATTR, TRSP and FOOD denotes the topic categories of attraction, transportation, and food, respectively. Figure 1 shows an example of topic tracking on a dialogue fragment between a tour guide and a tourist. Since each tag starting with ‘B’ should occur at the beginning of a new segment after a topic transition from its previous one, the label sequence indicates that this conversation is divided into six segments at t = {2, 4, 6, 11, 13}. The initiativity of each segment can be also found from who the speaker of the first utterance of the segment is. In this example, three of the cases are initiated by the tourist at t = {2, 4, 6}, but the others are leaded by the tour guide, which means it is a mixed-initiative type of conversation. Different from the former studies (Kim et al., 2014a; Kim et al., 2014b) that were only focused on detecting transitions between different topic categories, this work subdivides each dialogue sequence which belongs to a single topic category, but discusses more than one subject that can be more specifically differentiated from each other. The above example also has two cases of transitions with no change of topic categories at t = {4, 11}: the first one is due to the tourist’s request for an alternative attraction from the recommendation in the previous segment, and the other transition is triggered by the tour guide to suggest another option of transportation which is also available for the route discussed previously. 964 ut-1 ut ut-2 ut-h+1 … Input utterances within window size h Embedding layer with three different channels for current, previous, and history utterances Convolutional layer with multiple kernel sizes Max pooling layer Dense layer w softmax output Figure 2: Convolutional neural network architecture for dialogue topic tracking. 3 Models The classifier f can be built with supervised machine learning techniques, when a set of example dialogues manually annotated with gold standard labels are available as a training set. The earlier studies (Kim et al., 2014a; Kim et al., 2014b) also proposed supervised classification approaches particularly focusing on kernel methods to incorporate domain knowledge obtained from external resources into the linear vector space models based on bag-of-words features extracted from the training dialogues. This work, on the other hand, aims at improving the classification capabilities only with the internal contents in given dialogues rather than making better uses of external knowledge. To overcome the limitations of the simple vector space models used in the previous work, we propose models based on convolutional and recurrent neural network architectures. These models are presented in the remainder of this section. 3.1 Convolutional Neural Networks A convolutional neural network (CNN) automatically learns the filters in its convolutional layers which are applied to extract local features from inputs. 
Then, these lower-level features are combined into higher-level representations following a given network architecture. These aspects of CNNs make themselves a good fit to solve the problems which are invariant to the location where each feature is extracted on its input space and also depend on the compositional relationships between local and global features, which is the reason why CNNs have succeed in computer vision (LeCun et al., 1998). As implied by the successes of bag-of-words or bag-of-ngrams considering the existence of each linguistic unit independently and the important roles of compositional structures in linguistics, CNN models have recently achieved significant improvements also in some natural language processing tasks (Collobert et al., 2011; Shen et al., 2014; Yih et al., 2014; Kalchbrenner et al., 2014a; Kim, 2014). The model for dialogue topic tracking (Figure 2) is basically based on the CNN architecture proposed by Collobert et al. (2011) and Kim (2014) for sentence classification tasks. In the architecture, a sentence of length n is represented as a matrix with the size of n × k concatenated with n rows each of which is the kdimensional word vector ⃗xi ∈Rk representing the i-th word in the sentence. This embedding layer can be learned from scratch with random initialization or fine-tuned from pre-trained word vectors (Mikolov et al., 2013) with back propagation during training the network. Unlike other sentence classification tasks, dialogue topic tracking should consider not only a single sentence given at each time step, but also the other utterances previously mentioned. To incorporate the dependencies to the dialogue history into the topic tracking model, the input at the time step t is composed of three different channels each of which represents the current utterance ut, the previous utterance ut−1, and the other utterances ut−h+1:t−2 within h time steps, respectively, where ut is the t-th utterance in a session, ui:j is the concatenation of the utterances occurred from the i-th to the j-th time steps in the history, and h is the size of history window. The height of the n × k matrices of the first two channels for 965 the current and previous utterances is fixed to the length of the longest utterance in the whole training dataset, and then all the remaining rows after the end of each utterance are zero-padded to make all inputs same size. Since the other channel is made up by concatenating the utterances from the (t −h + 1)-th to the (t −2)-th time steps, it has a matrix with the dimension of ((h −2) · n) × k where all the gaps between contiguous utterances in the matrix are filled with zero. In the convolutional layer, each filter F ∈Rkm which has the same width k as the input matrix and a given window size m as its height slides over from the first row to the (n −m + 1)-th row of the input matrix. At the i-th position, the filter is applied to generate a feature ci = g (F · ⃗xi:i+m−1 + b), where ⃗xi:j is the subregion from the i-th row to the j-th row in the input, b ∈R is a bias term, and g is a non-linear activation function such as rectified linear units. This series of convolution operations produces a feature map ⃗c = [c1 · · · cn−m+1] ∈Rn−m+1 for the filter F. Then, the maximum value c′ = max(⃗c) is selected from each feature map considered as the most important feature for the particular filter in the max-pooilng layer. 
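A minimal NumPy sketch of this per-filter computation on a single channel may help fix the dimensions; the toy sizes and the use of a rectified linear unit for g are illustrative assumptions.

import numpy as np

n, k, m = 20, 50, 3                       # utterance length, embedding size, filter height
rng = np.random.default_rng(0)
X = rng.normal(size=(n, k))               # embedded (zero-padded) utterance, one channel
F = 0.1 * rng.normal(size=(m, k))         # a single convolution filter
b = 0.0                                   # bias term

relu = lambda z: np.maximum(z, 0.0)       # the non-linearity g
c = np.array([relu(np.sum(F * X[i:i + m]) + b)   # c_i = g(F . x_{i:i+m-1} + b)
              for i in range(n - m + 1)])        # feature map of length n - m + 1
print(c.shape, c.max())                   # (18,) and the single max-pooled feature value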
Every filter is shared across all three channels, but both the convolution and max-pooling operations are performed individually for each channel. Thus, the total number of feature values generated in the pooling layer is three times the number of filters. Finally, these values are forwarded to the fully-connected layer with softmax, which generates the probability distribution over the topic labels for a given input.
3.2 Recurrent Neural Networks
Dialogue topic tracking is conceptually performed on a sequence of interactions exchanged by the participants in a given session, from its beginning to each turn. Thus, the contents discussed previously in the dialogue history are likely to have an important influence on tracking the current topic at a given turn, which is a fundamental difference from other text categorization problems that consider each input independently of all others. To make use of the sequential dependencies in dialogue topic tracking, we propose models based on recurrent neural networks (RNNs), which learn the temporal dynamics by recurrent computations applied to every time step in a given input sequence.
Figure 3: Recurrent neural network architecture for dialogue topic tracking (input utterances → utterance-level embedding layer → forward layer → backward layer → output labels). The backward layer with the dotted lines is enabled only with its bidirectional extension.
In a traditional RNN, the hidden states connecting input sequences to output labels are repeatedly updated with the operation $\vec{s}_t = g(U x_t + W \vec{s}_{t-1})$, where $x_t$ is the t-th element in a given input sequence, $\vec{s}_t \in \mathbb{R}^{|s|}$ is the hidden state at t with |s| hidden units, and g is a non-linear activation function. The parameters U and W are shared across all time steps. RNNs have been successfully applied to several natural language processing tasks including language modeling (Mikolov et al., 2010), text generation (Sutskever et al., 2011), and machine translation (Auli et al., 2013; Liu et al., 2014), all of which focus on dealing with variable-length word sequences. In dialogue topic tracking, on the other hand, the input sequence to be handled is composed of utterance-level units instead of words. In our model (Figure 3), each utterance is represented by the k-dimensional vector $\vec{u}_t \in \mathbb{R}^k$ obtained from pre-trained sentence-level embeddings (Le and Mikolov, 2014). A sequence of the utterance vectors within h time steps is then connected in the recurrent layers. The default order of applying the recurrent operation is ascending, from the earlier to the more recent utterances, which is performed in the forward layer. The opposite direction can also be considered in the backward layer, which is stacked on top of the forward layer to build a bidirectional RNN (Schuster and Paliwal, 1997) that outputs the concatenation of the forward and backward states as the outcome of the recurrent operations. Then, these hidden states from the recurrent layers are passed to the fully-connected softmax layer to generate the output distributions for every time step in the sequence.
Figure 4: Recurrent convolutional network architecture for dialogue topic tracking (input utterances → convolutional layer → max-pooling layer → forward layer → backward layer → output labels). The backward layer is only for the bi-directional mode.
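As a concrete illustration of the recurrence $\vec{s}_t = g(U x_t + W \vec{s}_{t-1})$ and of the bidirectional state concatenation described above, here is a minimal NumPy sketch; the tanh activation, the omission of a bias term, and the parameter shapes are illustrative assumptions, and the hidden size follows the value reported later in Section 4.2.4.

```python
import numpy as np

def rnn_pass(X, U, W, s0):
    """Run s_t = tanh(U x_t + W s_{t-1}) over a sequence X of shape (T, k).

    U has shape (hidden, k) and W has shape (hidden, hidden).
    """
    states, s = [], s0
    for x_t in X:
        s = np.tanh(U @ x_t + W @ s)
        states.append(s)
    return np.stack(states)                          # (T, hidden)

def bidirectional_states(X, U_f, W_f, U_b, W_b, hidden=500):
    """Concatenate forward and backward hidden states for each time step."""
    s0 = np.zeros(hidden)
    forward = rnn_pass(X, U_f, W_f, s0)              # oldest -> newest utterance
    backward = rnn_pass(X[::-1], U_b, W_b, s0)[::-1] # newest -> oldest, re-aligned
    return np.concatenate([forward, backward], axis=1)  # (T, 2 * hidden)

# Usage: X holds the h utterance vectors (e.g. doc2vec embeddings of
# u_{t-h+1} ... u_t); the concatenated states then feed a softmax layer
# that outputs a topic-label distribution at every time step.
```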
The output from the model at a given time step t is a label sequence [yt−h+1, · · · , yt] for the recent h utterances. Since the labels for the earlier utterances should have been already decided at the corresponding turns, only yt is taken as the final outcome for the current time step. The hypothesis to be examined with this model is whether the other h −1 predictions that are not directly reflected to the results could help to improve the tracking performances by being considered together in the process of determining the current topic status. 3.3 Recurrent Convolutional Networks The last approach proposed in this work aims at combining the two models described in the previous sections. In this model (Figure 4), each feature vector generated through the embedding, convolutional, and max pooling layers in the CNN network (Section 3.1) is connected to the recurrent layers in the RNN model (Section 3.2). This combination is expected to play a significant role in overcoming the limitations of the sentence-level embedding considered as a feature representation in the RNN model. While the previous approach depends only on a pre-trained and non-tunable embedding model, all the parameters in the combined network can be fine-tuned with back propagation by considering the convolutional features extracted at each time step and also the temporal dependencies occurred through multiple time steps in given dialogue sequences. In computer vision, this kind of models connecting RNNs on top of CNNs is called recurrent convolutional neural networks (RCNN), which have been mostly used for exploring the dependencies between local convolutional features within a single image (Pinheiro and Collobert, 2014; Liang and Hu, 2015). Recently, they are also applied in video processing (Donahue et al., 2015) where visual features are extracted from the image at each frame using CNNs and the temporal aspects are learned with RNNs from the frame sequence of an input video. Our proposed model for dialogue topic tracking was originally motivated by this success of RCNNs particularly in video recognition considering that video and dialogue are analogous from the structural point of view. Each instance of a video and a dialogue consists of a temporal sequence of static units. 4 Evaluation 4.1 Data To demonstrate the effectiveness of our proposed models, we performed experiments on TourSG corpus released for the fourth dialogue state tracking challenge (DSTC4) (Kim et al., 2016). The dataset consists of 35 dialogue sessions collected from human-human conversations about tourism in Singapore between tour guides and tourists. All the dialogues have been manually transcribed and annotated with the labels for the challenge tasks. For the multi-topic dialogue state tracking which is the main task of the challenge, each dialogue session is divided into sub-dialogues and each segment is assigned with its topic category. Since the task particularly focuses on filling out the topicspecific frame structure with the detailed information representing the dialogue states of a given segment, it has been performed under the assumption that the manual annotations for both segmentations and topic categories are provided as parts of every input. But, in this work for dialogue topic tracking, these labels are considered as the targets to be generated automatically by the models. Every segment in the dataset belongs to one of eight topic categories. 
Following the nature of the tourism domain, the 'attraction' category accounts for the highest portion of the segments at 40.12%, followed by 'transportation', 'food', 'accommodation', 'shopping', 'closing' and 'opening' in order of decreasing frequency. The remaining 10.53%, considered beyond the scope of the task, are annotated with 'other'. Figure 5 shows the distributions of the segments not only by topic category, but also by transition type from two different points of view: the first is which speaker initiates each segment, and the other is whether the segmentation causes a topic category shift or not. The most frequent type found in the dataset is the guide-initiative, intra-categorical transition: 63.86% of the total segments are initiated by guides and 61.31% keep the topic category of the preceding segment.
Figure 5: Distributions of the segments in TourSG corpus by topic categories and transition types (number of segments per category, broken down into guide-initiative/intra-categorical, tourist-initiative/intra-categorical, guide-initiative/inter-categorical, and tourist-initiative/inter-categorical transitions). ATTR, TRSP, FOOD, ACCO and SHOP denote the topic categories of attraction, transportation, food, accommodation, and shopping, respectively.
Table 1: Statistics of TourSG corpus. The whole dataset is divided into three subsets for training, development, and test purposes.
Set     # sessions   # segments   # utterances
Train   14           2,104        12,759
Dev     6            700          4,812
Test    15           2,210        13,463
Total   35           5,014        31,034
For our experiments, all these segment-level annotations were converted into utterance-level BIO tags, each of which belongs to one of 15 classes: ({B-, I-} × {c : c ∈ C and c ≠ 'other'}) ∪ {O}, where C consists of all eight topic categories. The partition of the dataset (Table 1) has been kept the same as the one used for the state tracking task in DSTC4.
4.2 Models
Based on the dataset, we built 16 different models classified into the following five model families.
4.2.1 Baseline 1: Support Vector Machines
The first baseline uses support vector machine (SVM) (Cortes and Vapnik, 1995) models trained with the following features:
• BoN_t: bag of uni/bi/tri-grams in u_t weighted by tf-idf, the product of the term frequency in u_t and the inverse document frequency across all the training utterances.
• BoN_{t-1}: bag of n-grams computed in the same way as BoN_t for the previous utterance.
• $BoN_{history} = \sum_{j=0}^{h} \lambda^j \cdot BoN_{t-j}$: weighted sum of the n-gram vectors of the recent h = 10 utterances with a decay factor λ = 0.9.
• SPK_t, SPK_{t-1}: speakers of the current and the previous utterances.
• SPK_{t-1,t}: bi-gram of SPK_t and SPK_{t-1}.
Another variation replaces the bag of n-grams with utterance-level neural embeddings inferred by the pre-trained 300-dimensional doc2vec (Le and Mikolov, 2014) model on 2.9M sentences with 37M words in 553k Singapore-related posts collected from travel forums. The third model takes the concatenation of both the bag-of-ngrams and doc2vec features. All three baselines were implemented based on the one-against-all approach, with the same number of binary classifiers as the total number of classes for multi-label classification. SVMlight (Joachims, 1999) was used for building each binary classifier with the linear kernel.
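A minimal sketch of the decayed history feature $BoN_{history} = \sum_{j} \lambda^j \cdot BoN_{t-j}$ used in the SVM baseline, with scikit-learn's TfidfVectorizer standing in for the paper's tf-idf weighted n-gram vectors; the vectorizer settings and input format are illustrative assumptions, not the authors' implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Tf-idf weighted uni/bi/tri-gram vectors, fitted on the training utterances.
vectorizer = TfidfVectorizer(ngram_range=(1, 3))

def bon_history(utterances, t, vectorizer, h=10, lam=0.9):
    """Decayed sum of bag-of-ngram vectors over the h most recent utterances.

    utterances: list of utterance strings for one session
    t: index of the current utterance in that session
    """
    vec = None
    for j in range(0, min(h, t + 1)):
        bon = vectorizer.transform([utterances[t - j]]).toarray()[0]
        contrib = (lam ** j) * bon      # older utterances are down-weighted by lambda^j
        vec = contrib if vec is None else vec + contrib
    return vec

# Example:
# vectorizer.fit(train_utterances)
# feats = bon_history(session_utterances, t=12, vectorizer=vectorizer)
```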
4.2.2 Baseline 2: Conditional Random Fields
To also incorporate the temporal aspects into the linear models, conditional random fields (CRFs) (Lafferty et al., 2001), which have been successfully applied to other sequential labelling problems, were used for the second set of baselines. Similar to our proposed RNN architecture (Section 3.2), the recent utterances within the window size of h = 10 composed the first-order linear-chain CRFs. Three CRF models were built using CRFsuite (Okazaki, 2007) with the same feature sets as in the SVM models.
4.2.3 CNN-based models
For the CNN architecture (Section 3.1), we compared two different models: the first learned the word embeddings from scratch with random parameters, while the other was initialized with word2vec (Mikolov et al., 2013) trained on the same dataset as the doc2vec model in Section 4.2.1. Both approaches generated a dense vector with a dimension of k = 300 for each word in the utterances. The embedded vectors were then concatenated into three matrices representing the current, previous, and history utterances, respectively. While the first two channels for a single utterance, u_t or u_{t−1}, had a size of 65 × 300 according to the maximum number of words n = 65 in the training utterances, the number of rows in the other matrix was 520, which is eight times as large as the others, to represent the history utterances from u_{t−9} to u_{t−2} where h = 10. In the convolutional layer, 100 feature maps were learned for each of three different filter sizes m = {3, 4, 5} by sliding them over the utterances, which produced 900 feature values in total after the max-pooling operations for all three channels. In addition to these learned features, the SPK_t and SPK_{t−1} values introduced in Section 4.2.1 were appended to each feature vector to take the speaker information into account, as in the baselines. Before the fully-connected layer, dropout was performed with a rate of 0.25 for regularization. Training was then done with stochastic gradient descent (SGD) by minimizing the categorical cross-entropy loss on the training set. All the neural network-based models in this work were implemented using Theano (Bergstra et al., 2010), with the parameters obtained from a grid search on the development set.
4.2.4 RNN and RCNN-based models
Each proposed recurrent network (Sections 3.2 and 3.3) was implemented with four variations, categorized by whether the backward layer is included and by which architecture is used in the recurrent layers: traditional RNNs or long short-term memories (LSTMs) (Hochreiter and Schmidhuber, 1997). The RCNN models based on LSTMs are called long-term recurrent convolutional networks (LRCN) (Donahue et al., 2015). All the RCNN-based models were initialized with the pre-trained word2vec model in the training phase. The dimension of the hidden layers of the recurrent cells was chosen to be |s| = 500 based on the development set. The other settings, including the parameters, the training algorithm, and the loss function, were the same as in Section 4.2.3.
Figure 6: Comparisons of the topic tracking performances (F-score vs. number of training epochs) of the CNN models with different word embedding approaches (initialized with word2vec vs. learned from scratch) in the development phase.
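To illustrate how the three CNN input channels described in Section 4.2.3 can be assembled, here is a NumPy sketch of one reasonable reading of the zero-padding scheme (65 × 300 matrices for u_t and u_{t−1}, and a 520 × 300 matrix for the history u_{t−9} ... u_{t−2}); giving each history utterance its own fixed 65-row slot is an assumption, and the embedding lookup is an assumed helper.

```python
import numpy as np

N_WORDS, DIM = 65, 300          # longest training utterance, word2vec dimension
HIST_ROWS = 8 * N_WORDS         # 520 rows for the 8 history utterances (h = 10)

def pad_utterance(vectors, n_rows):
    """Stack word vectors into an (n_rows, DIM) matrix, zero-padding the rest."""
    mat = np.zeros((n_rows, DIM))
    for i, v in enumerate(vectors[:n_rows]):
        mat[i] = v
    return mat

def build_channels(embed, utterances, t):
    """embed(u) is an assumed helper returning the list of word vectors of utterance u."""
    current = pad_utterance(embed(utterances[t]), N_WORDS)
    previous = pad_utterance(embed(utterances[t - 1]), N_WORDS) if t >= 1 \
               else np.zeros((N_WORDS, DIM))
    # History channel: u_{t-9} ... u_{t-2}, each padded into its own 65-row slot;
    # time steps before the start of the session are filled with zeros.
    history = np.concatenate(
        [pad_utterance(embed(utterances[i]), N_WORDS) if i >= 0
         else np.zeros((N_WORDS, DIM))
         for i in range(t - 9, t - 1)],
        axis=0)                                  # (520, 300)
    return current, previous, history
```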
4.3 Results
Table 2 compares the performances of the models trained on the combination of the training and development sets and evaluated on the test set. The parameters for each model were decided in the development phase, in which the models were built under various settings on the training set only and validated on the development set. The evaluations were performed with precision, recall, and F-measure against the manual annotations under three different schedules: at tourist turns, at guide turns, and at all turns. The statistical significance of every pairwise difference was computed using approximate randomization (Yeh, 2000).
Comparing the two baseline families, the sequential extensions in the CRF models contributed significant improvements (p < 0.05) over the SVM models in all the schedules. But in both the SVM and CRF models, the doc2vec features failed to achieve performances comparable to the simplest bag-of-ngrams features. Even the improvements from combining them with the word features were not statistically significant. While these sentence-level embeddings trained in an unsupervised manner exposed their limitations in dialogue topic tracking performance, our proposed CNN-based models outperformed all these baselines. In particular, the CNN initialized with the pre-trained word2vec model achieved performances higher than the best baseline results by 8.38%, 6.41%, and 7.21% in F-measure under each schedule, respectively.
                          Schedule: Tourist Turns   Schedule: Guide Turns    Schedule: All
Models                    P      R      F           P      R      F         P      R      F
SVM (BoN+SPK)             61.60  62.18  61.89       58.65  58.42  58.53     59.85  59.94  59.90
SVM (D2V+SPK)             45.05  51.32  47.98       47.78  52.98  50.24     46.66  52.31  49.32
SVM (BoN+SPK+D2V)         61.60  62.18  61.89       58.74  58.53  58.63     59.91  60.01  59.96
CRF (BoN+SPK)             61.18  62.72  61.94       59.27  59.78  59.52     60.05  60.97  60.51
CRF (D2V+SPK)             61.53  49.42  54.81       61.94  49.68  55.13     61.77  49.57  55.00
CRF (BoN+SPK+D2V)         61.22  62.76  61.98       59.30  59.81  59.55     60.08  61.00  60.54
CNN (from scratch)        64.74  63.46  64.10       63.29  62.48  62.88     63.88  62.87  63.37
CNN (with W2V)            69.26  71.49  70.36       65.29  66.65  65.96     66.91  68.61  67.75
Uni-directional RNN       49.46  54.34  51.79       49.54  53.36  51.38     49.51  53.75  51.55
Bi-directional RNN        48.54  49.96  49.24       48.86  49.72  49.29     48.73  49.82  49.27
Uni-directional LSTM      49.52  50.81  50.15       49.41  49.85  49.63     49.45  50.23  49.84
Bi-directional LSTM       48.39  49.05  48.72       48.44  48.58  48.51     48.42  48.77  48.59
Uni-directional RCNN      69.49  71.59  70.52       65.43  66.68  66.05     67.08  68.67  67.86
Bi-directional RCNN       69.81  72.50  71.13       65.49  67.28  66.37     67.25  69.39  68.30
Uni-directional LRCN      69.37  71.45  70.40       66.22  67.41  66.81     67.50  69.04  68.26
Bi-directional LRCN       69.85  72.56  71.18       66.04  67.62  66.82     67.60  69.62  68.59
Table 2: Comparisons of the topic tracking performances with different models. D2V and W2V denote the vectors from doc2vec and word2vec, respectively.
Figure 6 presents the differences between the two CNN models observed in the development phase. As the number of epochs increases, the performance of both models also increases up to a certain point of saturation. But the model with random initialization took much longer to start gaining scores in the earlier iterations, and its saturated performance was also lower than that of the model learned on top of word2vec. In contrast to the success of the CNN models, the proposed RNN architectures were not able to produce quality results, which was also caused by the limitations of the doc2vec representations, as already shown in the baseline results.
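For reference, a minimal sketch of the approximate randomization test (Yeh, 2000) used above for significance testing; the per-turn output lists and the scoring function are assumed inputs, not the authors' evaluation code.

```python
import random

def approximate_randomization(scores_a, scores_b, metric, trials=10000, seed=0):
    """Two-sided approximate randomization test on paired per-turn outputs.

    scores_a, scores_b: aligned lists of per-turn outputs of two systems
    metric: function mapping such a list to a single score (e.g. F-measure)
    Returns an estimated p-value for the observed score difference.
    """
    rng = random.Random(seed)
    observed = abs(metric(scores_a) - metric(scores_b))
    at_least_as_large = 0
    for _ in range(trials):
        shuffled_a, shuffled_b = [], []
        for a, b in zip(scores_a, scores_b):
            # Swap the two systems' outputs for this turn with probability 0.5.
            if rng.random() < 0.5:
                a, b = b, a
            shuffled_a.append(a)
            shuffled_b.append(b)
        if abs(metric(shuffled_a) - metric(shuffled_b)) >= observed:
            at_least_as_large += 1
    return (at_least_as_large + 1) / (trials + 1)
```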
Although some RNN models showed slight performance gains over the SVM baselines that use only doc2vec features, they were still worse than the CRF model with the same features. On the other hand, the RCNN models, which connect the outputs of the CNNs to the RNNs, contributed performance improvements not only over the baselines, but also over the CNN models. While the uni-directional RNN was preferred among the RNN models using only doc2vec, the bi-directional LSTM showed better results in the RCNN architectures. As a result, the bi-directional LRCN model achieved the best performances of all, which were statistically significant (p < 0.01) improvements over the second-best results with the bi-directional RCNN.
Table 3 shows the segmentation performances evaluated by considering only the beginning of each segment predicted by the best model of each architecture family. The proposed CNN and LRCN models demonstrated better capabilities of detecting topic transitions than the baselines, in both the intra-categorical and inter-categorical conditions. While the CNN model tended to have a higher coverage in segmentation than the others, the LRCN model produced more precise decisions in recognizing the boundaries, thanks to its consideration of conversational coherence in the dialogue history sequences. However, the segmentation performances even with the best models were still very limited, especially for inter-categorical transitions. Most of the models in the experiment also performed better on tourist turns than on guide turns, as shown in Table 2. Considering that, in the target domain conversations, guide-driven and inter-categorical transitions are more likely to depend on human background knowledge than tourist-driven and intra-categorical cases, respectively, these limitations are expected to be tackled by leveraging external resources in the models in future work.
                          Intra-categorical       Inter-categorical       All
Models                    P      R      F          P      R      F        P      R      F
SVM (BoN+SPK+D2V)         40.22  30.19  34.49      8.68   28.14  13.26    18.65  29.51  22.85
CRF (BoN+SPK+D2V)         36.42  25.92  30.28      11.57  24.40  15.70    21.58  25.41  23.34
CNN (with W2V)            41.25  41.50  41.37      17.02  40.87  24.03    28.06  41.29  33.41
Bi-directional LRCN       44.82  38.28  41.29      17.87  40.72  24.84    29.41  39.09  33.57
Table 3: Comparisons of the segmentation performances with different models.
Figure 7: Distributions of errors (missing, extraneous, wrong category, wrong boundary) generated from the best model of each architecture (SVM, CRF, CNN, LRCN).
Finally, the errors generated by the models were categorized into the following error types:
• Missing predictions: when the reference belongs to one of the labels other than 'O', but the model predicts it as 'O'.
• Extraneous labelling: when the reference belongs to 'O', but the model predicts it as another label.
• Wrong categorizations: when the reference belongs to a category other than 'O', but the model predicts it as another, wrong category.
• Wrong boundary detections: when the model outputs the correct category, but with a wrong prediction of 'B' instead of 'I' or 'I' instead of 'B'.
The error distributions in Figure 7 indicate that the significantly decreased numbers of wrong categories were the decisive factor in the performance improvements of our proposed approaches over the baselines.
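A minimal sketch of how per-utterance predictions can be binned into the four error types above, given reference and predicted BIO tags (in the format 'B-ATTR', 'I-FOOD', ..., or 'O'); this follows our reading of the definitions, not the authors' evaluation script.

```python
def error_type(ref, pred):
    """Classify one utterance-level prediction against its reference BIO tag."""
    if ref == pred:
        return None                            # not an error
    if ref != 'O' and pred == 'O':
        return 'missing'
    if ref == 'O' and pred != 'O':
        return 'extraneous'
    ref_bound, ref_cat = ref.split('-', 1)     # e.g. 'B', 'ATTR'
    pred_bound, pred_cat = pred.split('-', 1)
    if ref_cat != pred_cat:
        return 'wrong category'
    return 'wrong boundary'                    # same category, B/I confused

def error_distribution(references, predictions):
    counts = {'missing': 0, 'extraneous': 0, 'wrong category': 0, 'wrong boundary': 0}
    for ref, pred in zip(references, predictions):
        kind = error_type(ref, pred)
        if kind is not None:
            counts[kind] += 1
    return counts
```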
Besides, the enhanced capabilities of the models in distinguishing between ‘O’ and other labels were demonstrated by the reduced numbers of missing and extraneous predictions. The sequential architectures in CRF and LRCN models also showed its effectiveness especially in boundary detection, as expected. 5 Conclusions This paper presented various neural network architectures for dialogue topic tracking. Convolutional neural networks were proposed to capture the semantic aspects of utterances given at each moment, while recurrent neural networks were intended to incorporate temporal aspects in dialogue histories into tracking models. Experimental results showed that the proposed approaches helped to improve the topic tracking performance with respect to the linear baseline models. Furthering this work, there would be still much room for improvement in future. Firstly, the architectures based on a single convolutional layer and a single bi-directional recurrent layer in the proposed models can be extended by adding more layers as well as utilizing more advanced components including hierarchical CNNs (Kalchbrenner et al., 2014b) to deal with utterance compositionalities or attention mechanisms (Denil et al., 2012) to focus on more important segments in dialogue sequences. Secondly, the use of external knowledge could be a key to success in dialogue topic tracking, as proved in the previous studies. However, this work only takes internal dialogue information into account for making decisions. If we develop a good way of leveraging other useful resources into the neural network architectures, better performance can be expected especially for guide-driven and inter-categorical topic transitions that are considered to be more dependent on background knowledge of the speakers. The other direction of our future work is to investigate joint models for tracking dialogue topics and states simultaneously. Although the previous multi-topic state tracking task has assumed that the topics should be given as inputs to state trackers, we expect that a joint approach can contribute to both problems by dealing with the bi-directional relationships between them. 971 References P. H. Adams and C. H. Martell. 2008. Topic detection and extraction in chat. In Proceedings of the 2008 IEEE International Conference on Semantic Computing, pages 581–588. M. Auli, M. Galley, C. Quirk, and G. Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1044–1054. J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. 2010. Theano: A cpu and gpu math compiler in python. In Proc. 9th Python in Science Conf, pages 1–7. D. Bohus and A. Rudnicky. 2003. Ravenclaw: dialog management using hierarchical task decomposition and an expectation agenda. In Proceedings of the European Conference on Speech, Communication and Technology, pages 597–600. A. Celikyilmaz, D. Hakkani-T¨ur, and G. T¨ur. 2011. Approximate inference for domain detection in spoken language understanding. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (INTERSPEECH), pages 713–716. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. C. Cortes and V. Vapnik. 1995. Support-vector networks. 
Machine learning, 20(3):273–297. M. Denil, L. Bazzani, H. Larochelle, and N. de Freitas. 2012. Learning where to attend with deep architectures for image tracking. Neural computation, 24(8):2151–2184. J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634. Y. Esteve, M. Bouallegue, C. Lailler, M. Morchid, R. Dufour, G. Linares, D. Matrouf, and R. De Mori. 2015. Integration of word and semantic features for theme identification in telephone conversations. In Natural Language Dialog Systems and Intelligent Assistants, pages 223–231. Springer. S. Hochreiter and J. Schmidhuber. 1997. Long shortterm memory. Neural computation, 9(8):1735– 1780. S. Ikeda, K. Komatani, T. Ogata, H. G. Okuno, and H. G. Okuno. 2008. Extensibility verification of robust domain selection against out-of-grammar utterances in multi-domain spoken dialogue system. In Proceedings of the 9th INTERSPEECH, pages 487– 490. T. Joachims. 1999. Making large-scale SVM learning practical. In B. Sch¨olkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods Support Vector Learning, chapter 11, pages 169– 184. MIT Press, Cambridge, MA. N. Kalchbrenner, E. Grefenstette, and P. Blunsom. 2014a. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 655–665. N. Kalchbrenner, E. Grefenstette, and P. Blunsom. 2014b. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 655–665. S. Kim, R. E. Banchs, and H. Li. 2014a. A composite kernel approach for dialog topic tracking with structured domain knowledge from wikipedia. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 19–23. S. Kim, R. E. Banchs, and H. Li. 2014b. Wikipediabased kernels for dialogue topic tracking. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 131–135. S. Kim, L. F. D’Haro, R. E. Banchs, J. D. Williams, and M. Henderson. 2016. The fourth dialog state tracking challenge. In Proceedings of the 7th International Workshop on Spoken Dialogue Systems (IWSDS). Y. Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. J. Lafferty, A. McCallum, and F.C.N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282–289. K. Lagus and J. Kuusisto. 2002. Topic identification in natural language dialogues using neural networks. In Proceedings of the 3rd SIGdial workshop on Discourse and dialogue, pages 95–102. Q. Le and T. Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1188–1196. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324. C. Lee, S. Jung, and G. G. Lee. 2008. Robust dialog management with n-best hypotheses using dialog examples and agenda. 
In Proceedings of the 972 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 630–637. M. Liang and X. Hu. 2015. Recurrent convolutional neural network for object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3367–3375. B. Lin, H. Wang, and L. Lee. 1999. A distributed architecture for cooperative spoken dialogue agents with coherent dialogue state and history. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). S. Liu, N. Yang, M. Li, and M. Zhou. 2014. A recursive recurrent neural network for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 1491–1500. T. Mikolov, M. Karafi´at, L. Burget, J. Cernock`y, and S. Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (NIPS), pages 3111–3119. M. Morchid, R. Dufour, M. Bouallegue, G. Linares, and R. De Mori. 2014a. Theme identification in human-human conversations with features from specific speaker type hidden spaces. In INTERSPEECH, pages 248–252. M. Morchid, R. Dufour, P.M. Bousquet, M. Bouallegue, G. Linares, and R. De Mori. 2014b. Improving dialogue classification using a topic space representation and a gaussian classifier based on the decision rule. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 126–130. IEEE. T. Nakata, S. Ando, and A. Okumura. 2002. Topic detection based on dialogue history. In Proceedings of the 19th international conference on Computational linguistics (COLING), pages 1–7. N. Okazaki. 2007. Crfsuite: a fast implementation of conditional random fields (crfs). P. Pinheiro and R. Collobert. 2014. Recurrent convolutional neural networks for scene labeling. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 82–90. L. A. Ramshaw and M. P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the 3rd Workshop on Very Large Corpus, pages 88–94. S. Roy and L. V. Subramaniam. 2006. Automatic generation of domain models for call centers from noisy transcriptions. In Proceedings of COLING/ACL, pages 737–744. M. Schuster and K. K. Paliwal. 1997. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681. Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. 2014. Learning semantic representations using convolutional neural networks for web search. In Proceedings of the 23rd International Conference on World Wide Web (WWW), pages 373–374. International World Wide Web Conferences Steering Committee. I. Sutskever, J. Martens, and G. E. Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024. A. Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th conference on Computational linguisticsVolume 2, pages 947–953. W. Yih, X. He, and C. Meek. 2014. Semantic parsing for single-relation question answering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 643–648. 973
2016
91
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 974–983, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Cross-Lingual Lexico-Semantic Transfer in Language Learning Ekaterina Kochmar The ALTA Institute University of Cambridge [email protected] Ekaterina Shutova Computer Laboratory University of Cambridge [email protected] Abstract Lexico-semantic knowledge of our native language provides an initial foundation for second language learning. In this paper, we investigate whether and to what extent the lexico-semantic models of the native language (L1) are transferred to the second language (L2). Specifically, we focus on the problem of lexical choice and investigate it in the context of three typologically diverse languages: Russian, Spanish and English. We show that a statistical semantic model learned from L1 data improves automatic error detection in L2 for the speakers of the respective L1. Finally, we investigate whether the semantic model learned from a particular L1 is portable to other, typologically related languages. 1 Introduction Lexico-semantic knowledge of our native language is one of the factors that underlie our ability to communicate and reason about the world. It is also the knowledge that guides us in the process of second language learning. Lexico-semantic variation across languages (Bach and Chao, 2008) makes lexical choice a challenging task for second language learners (Odlin, 1989). For instance, the meaning of the English expression pull the trigger is realised as *push the trigger in Russian and Spanish, possibly leading to errors of lexical choice by Russian and Spanish speakers learning English. Our native language (L1) plays an essential role in the process of lexical choice. When choosing between several linguistic realisations in L2, non-native speakers may rely on the lexicosemantic information from L1 and select a translational equivalent that they deem to match their communicative intent best. For example, Russian speakers *do exceptions and offers instead of making them, and *find decisions instead of finding solutions, since in Russian do and make have a single translational equivalent (delat’), and so do decision and solution (resheniye). As a result, nonnative speakers who tend to fall back to their L1 translate phrases word-for-word, violating English lexico-semantic conventions. The effect of L1 interference on lexical choice in L2 has been pointed out in a number of studies (Chang et al., 2008; Rozovskaya, 2010; Rozovskaya, 2011; Dahlmeier and Ng, 2011). Some of these studies also demonstrated that using L1specific properties, such as the error patterns of speakers of a given L1 or L1-induced paraphrases, improves the performance of automatic error correction in non-native writing. However, neither of the approaches has constructed a semantic model from L1 data and systematically studied the effects of its transfer onto L2. In addition, most previous work has focused on error correction, bypassing the task of error detection for lexical choice. Lexical choice is one of the most challenging tasks for both non-native speakers and automated error detection and correction (EDC) systems. The results of the most recent shared task on EDC, which spanned all error types including lexical choice, show that most teams either did not propose any algorithms for this type of errors or did not perform well on them (Ng, 2014). 
In this paper, we experimentally investigate the influence of L1 on lexical choice in L2 and whether lexico-semantic models from L1 are transferred to L2 during language learning. For this purpose, we induce L1 and L2 semantic models from corpus statistics in each language independently, and then use the discrepancies between the two models to identify errors of lexical choice. We focus on two types of verb–noun combinations, VERB–DIRECT OBJECT (dobj) and 974 SUBJECT–VERB (subj), and consider two widely spoken L1s from different language families – Russian and Spanish. We conduct our experiments using the Cambridge Learner Corpus (Nicholls, 2003), containing writing samples of non-native speakers of English. Spanish speakers account for around 24.6% of the non-native speakers represented in this corpus and Russian speakers for 4%. Our experiments test two hypotheses: (1) that L1 effects in the lexical choice in L2 reveal themselves in the difference of the word association strength in the L1 and L2; and (2) that L1 lexicosemantic models are portable to other, typologically related languages. To the best of our knowledge, our paper is the first one to experimentally investigate these questions. Our results demonstrate that L1-induced information improves automatic error detection for lexical choice, confirming the hypothesis that L1 speakers rely on semantic knowledge from their native language during L2 learning. We test the second hypothesis by verifying that Russian speakers exhibit similar trends in errors with the speakers of other Slavic languages, and Spanish speakers with the speakers of other Romance languages. We find that the L1induced information from Russian and Spanish is effective in assessing lexical choice of the speakers of other languages for both language groups. 2 Related work 2.1 Error detection in content words Early approaches to collocation error detection relied on manually created databases of correct and incorrect word combinations (Shei and Pain, 2000; Wible et al., 2003; Chang et al., 2008). Constructing such databases is expensive and timeconsuming, and therefore, more recent research turned to the use of machine learning techniques. Leacock et al. (2014) note that most approaches to detection and correction of collocation errors compare the writer’s word choice to the set of alternatives using association strength measures and choose the combination with the highest score, reporting an error if this combination does not coincide with the original choice (Futagi et al., 2008; ¨Ostling and Knutsson, 2009; Liu et al., 2009). This strategy is expensive as it relies on comparison with a set of alternatives, limited in capacity as it depends on the quality of the alternatives generated and circular as the detection cannot be performed independently of the correction. Our approach alleviates these problems, since error detection depends on the original combination only. Some previous approaches focused on correction only (Dahlmeier and Ng, 2011; Kochmar and Briscoe, 2015), and although they show promising results, they have not attempted to perform error detection in lexical choice. Kochmar and Briscoe (2014) focus on error detection, but their system addresses adjective–noun combinations and does not use L1-induced information. 2.2 L1 factors in L2 writing The influence of an L1 on lexical choice in L2 and the resulting errors have been previously studied (Chang et al., 2008; ¨Ostling and Knutsson, 2009; Dahlmeier and Ng, 2011). 
These works focus on errors in particular L1s and use the translational equivalents directly to improve candidate selection and quality of corrections. Dahlmeier and Ng (2011) show that L1-induced paraphrases outperform approaches based on edit distance, homophones, and WordNet synonyms in selecting the appropriate corrections. Rozovskaya and Roth (2010) show that an error correction system for prepositions benefits from restricting the set of possible corrections to those observed in the non-native data. Rozovskaya and Roth (2011) further demonstrate that the models perform better when they use knowledge about error patterns of the non-native writers. According to their results, an error correction algorithm that relies on a set of priors dependent on the writer’s preposition and the writer’s L1 outperforms other methods. Madnani et al. (2008) show promising results in whole-sentence grammatical error correction using round-trip translations from Google Translate via 8 different pivot languages. The results of these studies suggest that L1 is a valuable source of information in EDC. However, all these works use isolated translational equivalents and focus on error correction only. In contrast, we construct holistic semantic models of L1 from L1 corpora and use these models to perform the more challenging task of error detection. 3 Data We first use large monolingual corpora in Spanish, Russian and English to build word association models for each of the languages. We then apply the resulting models for error detection in the English learner data. 975 3.1 L1 Data Spanish data The Spanish data was extracted from the Spanish Gigaword corpus (Mendonca et al., 2011), a one billion-word collection of news articles in Spanish. The corpus was parsed using the Spanish Malt parser (Nivre et al., 2007; Ballesteros et al., 2010). We extracted VERB–SUBJECT and VERB–DIRECT OBJECT relations from the output of the parser, which we then used to build an L1 word association model for Spanish. Russian data The Russian data was extracted from the RU-WaC corpus (Sharoff, 2006), a two billion-word representative collection of texts from the Russian Web. The corpus was parsed using Malt dependency parser for Russian (Sharoff and Nivre, 2011), and the VERB–SUBJECT and VERB–DIRECT OBJECT relations were extracted from the parser output to create an L1 word association model for Russian. Dictionaries and translation Once the L1 word associations have been computed for the verb– noun pairs, we identify possible translations for verbs and nouns (in each pair) in isolation, as a language learner might do. To create the translation dictionaries, we extracted translations from the English–Spanish and English–Russian editions of Wiktionary, both from the translation sections and the gloss sections if the latter contained single words as glosses. We focus on verb–noun pairs, therefore multi-word expressions were universally removed. We added inverse translations for every original translation. We then created separate translation dictionaries for each language and part-of-speech tag combination from the resulting collection of translations. 3.2 L2 data To build the English word association model, we have used a combination of the British National Corpus (Burnard, 2007) and the UKWaC (Baroni et al., 2009). The corpora were parsed by the RASP parser (Briscoe et al., 2006) and VERB– SUBJECT and VERB–DIRECT OBJECT relations were extracted from the parser output. 
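Across the L1 and L2 corpora the starting point is the same: read the dependency parses, keep the verb–noun pairs in the two target relations, and count them. A minimal counting sketch follows; the normalized token format, field names, relation labels, and 0-based head indices are illustrative assumptions, since the actual Malt and RASP output formats differ.

```python
from collections import Counter

TARGET_RELS = {'dobj', 'subj'}   # VERB-DIRECT OBJECT and SUBJECT-VERB dependencies

def count_verb_noun_pairs(parsed_sentences):
    """Count (verb, noun, relation) triples for the two target relations.

    parsed_sentences: iterable of sentences, each a list of token dicts with
    'lemma', 'pos', 'head' (0-based index of the governing token) and 'deprel'.
    """
    pair_counts = Counter()
    for sent in parsed_sentences:
        for tok in sent:
            if tok['deprel'] in TARGET_RELS and tok['pos'].startswith('N'):
                head = sent[tok['head']]
                if head['pos'].startswith('V'):
                    pair_counts[(head['lemma'], tok['lemma'], tok['deprel'])] += 1
    return pair_counts
```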
Since the UKWaC is a Web corpus, we assume that the data contains a certain amount of noise, e.g. typographical errors, slang and non-words. We filter these out by checking that the verbs and nouns in the extracted relations are included in WordNet (Miller, 1995) with the appropriate part of speech.
3.3 Learner data
To extract the verb–noun combinations that have been used by non-native speakers in practice, we use the Cambridge Learner Corpus (CLC), which is a 52.5 million-word corpus of learner English collected by Cambridge University Press and Cambridge English Language Assessment since 1993 (Nicholls, 2003). It comprises English examination scripts written by learners of English with 148 different L1s, ranging across multiple examinations and covering all levels of language proficiency. A 25.5 million-word component of the CLC has been manually error-annotated. We have preprocessed the CLC with the RASP parser (Briscoe et al., 2006), as it is robust when applied to ungrammatical sentences. We have then extracted all dobj and subj combinations: in total, we have extracted 187,109 dobj and 225,716 subj combinations. We have used the CLC error annotation to split the data into correct combinations and errors. We note that some verb–noun combinations are annotated both as being correct and as errors, depending on their wider context of use. To ensure that the annotation we use in our experiments is reliable and not context-dependent, we have empirically set a threshold to filter out ambiguously annotated instances. The set of correct word combinations includes only those word pairs that are used correctly in at least 70% of the cases they occur in the CLC; the set of errors includes only those that are used incorrectly at least 70% of the time.
3.4 Experimental datasets
We split the annotated CLC data by language and relation type. Table 1 presents the statistics on the datasets collected.1
         Source   Total    ERR (%)   verbs   nouns
RUdobj   RU       11,184   12.55     786     1,918
         ALL      62,923   14.02     1,387   4,168
RUsubj   RU       10,417   7.90      734     1,775
         ALL      63,649   9.49      1,403   4,374
ESdobj   ES       11,959   14.66     705     1,926
         ALL      32,966   15.17     1,072   2,928
ESsubj   ES       9,899    8.09      573     1,733
         ALL      26,766   9.42      877     2,762
Table 1: Statistics on the datasets collected. Source denotes the subset (by L1 and relation type) and the second column the CLC texts it is drawn from (the given L1 or ALL L1s).
1 The data is available at http://www.cl.cam.ac.uk/~ek358/cross-ling-data.html
We extract the verb–noun combinations from the CLC texts written by native speakers of Russian (RU) and Spanish (ES) to test our first hypothesis, as well as by speakers of ALL L1s in the CLC to test our second hypothesis. We then filter the extracted relations using the translated verb–noun pairs from the Russian and Spanish corpora. We note that Russian and Spanish have a comparable number of word combinations in the L1-specific subsets – 10K-12K for dobj and subj combinations – and comparable error rates (ERR). We also note that the error rates in the dobj subsets are higher than in the subj subsets, presumably because VERB–SUBJECT combinations allow for more flexibility in lexical choice. We find a large number of translated word combinations in other L1s, and it is interesting to note that the error rates are higher across multiple languages than in the same L1s, which corroborates our second hypothesis that the lexico-semantic models from L1s transfer to L2. The last two columns of Table 1 show how diverse our datasets are in terms of the verbs and nouns used in the constructions: for example, the RUdobj subset contains combinations with 786 different verbs and 1,918 different nouns.
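A minimal sketch of the 70% agreement threshold described in Section 3.3, applied to per-combination correct/error counts derived from the CLC annotation; the input format is an illustrative assumption.

```python
def split_by_threshold(usage_counts, threshold=0.7):
    """usage_counts: dict mapping (verb, noun) to (n_correct_uses, n_error_uses)."""
    correct_set, error_set = set(), set()
    for pair, (n_correct, n_error) in usage_counts.items():
        total = n_correct + n_error
        if total == 0:
            continue
        if n_correct / total >= threshold:
            correct_set.add(pair)    # used correctly in >= 70% of its occurrences
        elif n_error / total >= threshold:
            error_set.add(pair)      # used incorrectly in >= 70% of its occurrences
        # otherwise the pair is ambiguously annotated and discarded
    return correct_set, error_set

# Example:
# split_by_threshold({('find', 'solution'): (42, 3), ('find', 'decision'): (1, 9)})
```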
4 Methods Our approach to detecting lexico-semantic transfer errors relies on the intuition that a mismatch between the lexico-semantic models in two languages reveals itself in the difference in word association scores. We argue that a high association score of a verb–noun combination in L1 shows that it is a collocation in L1, but low association score of its translational equivalent in L2 signals an error in L2 stemming from the lexico-semantic transfer. Following previous research (Baldwin and Kim, 2010), we measure the strength of verb– noun association using pointwise mutual information (PMI). Figure 1 illustrates this intuition. In Russian, both *find decision vs. find solution have a high PMI score. However, in English the latter has a high PMI while the former has a negative PMI. We expect such a discrepancy in word association to be an indicator of error of lexical choice, driven by the L1 semantics. We treat the task of lexico-semantic transfer error detection as a binary classification problem and train a classifier for this task. The classifier uses a combination of L1 and L2 semantic features. If our hypothesis holds, we expect to see an improvement in the classifier’s performance when adding L1 semantic features. Figure 1: Russian to English interface for *find decision. 4.1 L2 lexico-semantic features We experiment with two types of L2 features: lexico-semantic features and semantic vector space features. Lexico-semantic features include: • pmi in L2: we estimate the association strength between the noun and verb using the combined BNC and UKWaC corpus; • verb and noun: the identity of the verb and the noun in the pair, encoded in a numerical form in the range of (0, 1). The motivation behind that step is that certain words are more error-prone than others and converting them into numerical features helps the classifier to use this information. Semantic vector space features Kochmar and Briscoe (2014) obtained state-of-the-art results in error detection by using the semantic component of the content word combinations. We reimplement these features and test their impact on our task. We extracted the noun and verb vectors from the publicly available word2vec dataset of word embeddings for 3 million words and phrases.2 The 300-dimensional vectors have been trained on a part of Google News dataset (about 100 billion words) using word2vec (Mikolov et al., 2013). The dobj and subj vectors are then built using element-wise addition on the vectors (Mitchell and Lapata, 2008; Mikolov et al., 2013; Kochmar and Briscoe, 2014). Once the compositional vectors are created, the method relies on the idea that correct combinations can be distinguished from the erroneous ones by certain vector properties (Vecchi et al., 2011; Kochmar and Briscoe, 2014). We implement a set of numerical features based on the following properties of the vectors: 2code.google.com/archive/p/word2vec/ 977 • length of the additive (vn) vector • cosvn∧n – cosine between the vn vector and the noun vector • cosvn∧v – cosine between the vn vector and the verb vector • dist10 – distance to the 10 nearest neighbours of the vn vector • lex-overlap – proportion of the 10 nearest neighbours of the vn vector containing the verb/noun • comp-overlap – overlap between the 10 neighbours of the vn vector and 10 neighbours of the verb/noun vector • cosv∧n – cosine between the verb and the noun vectors. The 10 nearest neighbours are retrieved in the combined semantic space containing word embeddings and additive phrase vectors. 
All features, except for the last one, have been introduced in previous work and showed promising results (Vecchi et al., 2011; Kochmar and Briscoe, 2014). For example, it has been shown that the distance from the constructed word combination vector to its nearest neighbours is one of the discriminative features of the error detection classifier. Manual inspection of the vectors and nearest neighbours shows that the closest neighbour to *find decision is see decision with the similarity of 0.8735 while the closest one to find solution is discover solution with the similarity of 0.9048. We implement an additional cosv∧n feature based on the intuition that the distance between the verb and noun vectors themselves may indicate a semantic mismatch and thus help in detecting lexical choice errors. 4.2 L1 lexico-semantic features We first quantified the strength of association between the L1 verbs and nouns in the original L1 data, using PMI. We then generated a set of possible translations for each verb–noun pair in L1 using the translation dictionaries. Each verb–noun pair in the CLC was then mapped to one of the translated L1 pairs and its L1 features. We used the following L1 features in classification: • pmi in L1: we estimate the strength of association on the original L1 corpora; • difference between the PMI of the verb– noun pair in L1 and in L2. 4.3 Classification Classifier settings We treat the task as a binary classification problem and apply a linear SVM classifier using scikit-learn LinearSVC implementation.3 The error rates in Table 1 show that we are dealing with a two-class problem where one class (correct word combinations) significantly outnumbers the other class (errors) by up to 11:1 (on RUsubj). To address the problem of class imbalance, we use subsampling: we randomly split the set of correct word combinations in n samples keeping the majority class baseline under 0.60, and run n experiments over the samples. We apply 10-fold cross-validation within each sample. The results reported in the following sections are averaged across the samples for each dataset. Evaluation The goal of the classifier is to detect errors, therefore we primarily focus on its performance on the error class and, in addition to accuracy, report precision (P), recall (R) and F1 on this class. Previous studies (Nagata and Nakatani, 2010) suggest that systems with high precision in detecting errors are more helpful for L2 learning than systems with high recall as non-native speakers find misidentified errors very misleading. In line with this research, we focus on maximising precision on the error class. Baseline We compare the performance of our different feature sets to the baseline classifier which uses L2 co-occurrence frequency of the verb and noun in the pair as a single feature. Frequency sets a competitive baseline as it is often judged to be the measure of acceptability of an expression and many previous works relied on the frequency of occurrence as an evidence of acceptability (Shei and Pain, 2000; Futagi et al., 2008). 5 Experimental Results To test our hypothesis that lexico-semantic models are transferred from L1 to L2, we first run the set of experiments on the L1 subsets of the CLC data, that is RU →RUCLC and ES →ESCLC, where the left-hand side of the notation denotes the lexico-semantic model and the right-hand side the L1 of the speakers that produced the word pairs extracted from the CLC. 
We incrementally add the features, starting with the set of lexico-semantic 3scikit-learn.org/ 978 L1 Features Acc Pe Re F1e RUdobj baseline 55.68 47.77 61.44 53.55 pmiEn 64.74 59.76 47.55 52.96 +verb 64.79 59.87 47.56 53.01 RUsubj baseline 54.48 46.30 63.96 53.17 pmiEn 67.02 58.86 62.74 60.74 +verb 67.64 59.84 62.17 60.98 ESdobj baseline 56.74 52.25 74.44 61.36 pmiEn 64.28 61.75 59.55 60.63 +verb 64.34 61.80 59.67 60.71 ESsubj baseline 54.45 46.71 70.31 56.00 pmiEn 69.22 61.35 68.83 64.87 +verb 69.51 61.79 68.58 65.00 Table 2: System performance (in %) using L2 lexico-semantic features, L1 →L1CLC. features in L2 that are readily available without reference to the L1, and later adding L1 semantic features, and measure their contribution. 5.1 L2 lexico-semantic features The first system configuration we experiment with uses the set of lexico-semantic features from L2. Table 2 reports the results. Our experiments show that a classifier that uses L2 PMI (pmiEn) as a single feature performs with relatively high accuracy: on all four datasets it outperforms the baseline classifier achieving an increase from 7.54% (on ESdobj) up to 14.77% (on ESsubj) in accuracy. Adding the noun as a feature decreases performance of the classifier and we do not further use this feature. The verb used as an additional feature consistently improves classifier performance. 5.2 L2 semantic vector space features Next, we test the combination of the semantic vector space features (sem) and combine them with two L2 lexico-semantic features including pmiEn and verb (denoted as ftEn hereafter for brevity). Table 3 reports the results. We note that the semantic vector space features on their own yield precision of 50% −52% on the error class in dobj combinations and lower than 50% on subj combinations. This suggests that the classifier misidentifies correct combinations as errors more frequently than it correctly detects errors. Moreover, recall of this system configuration is also low on all datasets. Adding the semantic vector space features to the other L2 semantic features, however, improves the performance, as shown in Table 3. As both groups of features refer to the phenomena in L2, the results suggest that they complement each other. L1 Features Acc Pe Re F1e RUdobj sem 58.36 50.72 6.98 12.22 +ftEn 65.90 58.64 62.18 60.35 RUsubj sem 58.62 36.07 3.40 6.12 +ftEn 68.37 60.05 66.48 63.07 ESdobj sem 54.51 52.01 20.78 29.48 +ftEn 66.87 63.36 67.08 65.16 ESsubj sem 58.63 49.37 9.27 15.47 +ftEn 70.75 62.21 74.31 67.72 Table 3: System performance (in %) using a combination of L2 semantic features, L1 →L1CLC. L1 Features Acc Pe Re F1e RUdobj ftEn 64.79 59.87 47.56 53.01 +pmiL1 66.05 58.74 62.72 60.67 RUsubj ftEn 67.64 59.88 62.17 60.98 +pmiL1 68.68 62.10 69.61 64.38 ESdobj ftEn 64.34 61.80 59.67 60.71 +pmiL1 66.89 63.01 68.61 65.68 ESsubj ftEn 69.51 61.79 68.58 65.00 +pmiL1 71.19 62.10 77.66 69.00 Table 4: System performance (in %) using L1 and L2 lexico-semantic features, L1 →L1CLC. 5.3 L1 lexico-semantic features Finally, we add the L1 lexico-semantic features to the well-performing L2 features (pmi and verb). The combination of L1 lexico-semantic features with the L2 lexico-semantic and semantic vector space features achieves lower results, therefore we do not report them here. The use of L1 pmi improves both the accuracy and the F-score of the error class (see Table 4). For the ease of comparison, we also include the results obtained using a combination of L1 lexico-semantic features (denoted ftEn). 
The addition of the explicit difference feature between the two PMIs has not yielded further improvement. This is likely to be due to the fact that the classifier already implicitly captures the knowledge of this difference in the form of individual L1 and L2 PMIs. We note that the system using a combination of L1 and L2 lexico-semantic features gains an absolute improvement in accuracy from 1.04% for RUsubj to 2.55% on ESdobj. The performance on the error class improves in all but one case (Pe on RUdobj), with an absolute increase in F1 up to 7.66%. The system has both a higher coverage in error detection (a rise in recall) and a higher precision. The improvement in performance across all four datasets is statistically significant at 0.05 level. These results demonstrate the effect of lexico-semantic model transfer from L1 to L2. 979 6 Effect on different L1s Next, we test our second hypothesis that a lexicosemantic model from one L1 is portable across several L1s, in particular, typologically related ones. We first experiment with the data representing all L1s in the CLC and then with the data representing a specific language group. We compare the performance of the baseline system using verb–noun co-occurrence frequency as a single feature, the system that uses L2 semantic features only and the system that combines both L2 and L1 semantic features. 6.1 Experiments on all L1s Table 1 shows that using the translated verb–noun combinations from our L1s (RU and ES) we are able to find a large amount of both correct and erroneous combinations in different L1s in the CLC including RU and ES (see ALL). This gives us an initial confirmation that the lexico-semantic models may be shared across multiple languages. We then experiment with error detection across all L1s represented in the CLC. The results are shown in Table 5. The baseline system achieves similar performance on RU →ALLCLC as on RU →RUCLC, and better performance on ES → ALLCLC than on ES →ESCLC. The results obtained with the L2 lexico-semantic features are also comparable: the system achieves an absolute increase in accuracy of up to 9.86% for the model transferred from RUsubj, reaching an accuracy of around 65 −66% with balanced performance in terms of precision and recall on errors. When the L1 lexico-semantic features are added to the model, we observe an absolute increase in the accuracy ranging from 0.57% (for RUsubj) to 1.43% (for ESdobj). The Spanish lexico-semantic model has a higher positive effect on all measures, including precision on the error class. Although the addition of the L1 lexico-semantic features does not have a significant effect on the accuracy and precision, the system achieves an absolute improvement in recall of up to 12.71% (on RUdobj). That is, the system that uses L1 lexico-semantic features is able to find more errors in the data originating with a set of different L1s. Generally, the results of the Spanish model are more stable and comparable to the results in the previous Section, which may be explained by the fact that Spanish is more well-represented in the CLC. 
L1 Features Acc Pe Re F1e RUdobj baseline 55.13 50.17 72.14 58.99 ftEn 63.58 59.73 57.98 58.85 +pmiL1 64.60 58.81 70.69 64.20 RUsubj baseline 54.56 47.95 71.10 56.71 ftEn 64.42 57.27 62.64 59.83 +pmiL1 64.99 57.24 68.17 62.21 ESdobj baseline 59.35 55.38 71.87 62.51 ftEn 64.32 61.89 63.47 62.67 +pmiL1 65.75 61.90 71.37 66.30 ESsubj baseline 58.34 50.90 66.97 57.48 ftEn 65.57 58.32 64.09 61.06 +pmiL1 66.54 58.80 68.72 63.36 Table 5: System performance (in %) using L1 and L2 lexico-semantic features, L1 →all L1s. 6.2 Experiments on related L1s The results on ALL L1s confirm our expectations: since we have extracted verb–noun combinations that originate with two particular L1s from the set of all different L1s in the CLC, and then used the L1 lexico-semantic features, the system is able to identify more errors thus we observe an improvement in recall. The precision, however, does not improve, possibly because the set of errors in ALL L1s is different from that in the two L1s we rely on to build the lexico-semantic models. The final question that we investigate is whether the lexicosemantic models of our L1s are directly portable to typologically related languages. If this is the case, we expect to see an effect on the precision of the classifier as well as on the recall. We experiment with the following groups of related languages ordered by the number of verb– noun pairs we found in the CLC data: • RU group: Russian, Polish, Czech, Slovak, Serbian, Croatian, Bulgarian, Slovene; • ES group: Spanish, Italian, Portuguese, French, Catalan, Romanian, Romansch. In addition to investigating the effect of the L1 lexico-semantic model on the whole language group, we also consider its effects on individual languages. We chose Polish for the RU model, and Italian for the ES model as these two languages have the most data representing their native speakers in the CLC. Table 6 shows the number of verb– noun combinations and error rates for the language groups and these individual languages. The results are presented in Tables 7 and 8. They exhibit similar trends in the change of the system performance on L1 →L1 GROUP as we 980 Source Targets Total ERR RUdobj Slavic 18, 721 9.19 Polish 11, 327 8.16 RUsubj Slavic 18, 511 6.80 Polish 11, 204 6.42 ESdobj Romance 18, 898 12.81 Italian 6, 375 10.92 ESsubj Romance 15, 871 7.57 Italian 5, 300 6.98 Table 6: Statistics on the L1 groups and related languages. L1 Features Acc Pe Re F1e RUdobj baseline 57.08 51.80 71.58 59.78 ftEn 64.20 60.99 55.36 58.04 +pmiL1 65.77 61.06 64.78 62.86 RUsubj baseline 56.43 49.52 62.04 54.24 ftEn 62.26 55.84 50.02 52.76 +pmiL1 62.78 56.02 54.48 55.21 ESdobj baseline 59.18 51.44 72.31 59.97 ftEn 65.14 59.82 53.83 56.66 +pmiL1 66.24 58.92 67.00 62.70 ESsubj baseline 58.10 52.95 77.43 62.45 ftEn 66.29 61.24 68.45 64.64 +pmiL1 67.00 61.68 70.50 65.78 Table 7: System performance (in %) using L1 and L2 lexico-semantic features, L1 →L1 GROUP. see for L1 →ALL L1s. Adding the L1 lexicosemantic features has only a minor effect on accuracy and precision, and a more pronounced effect on recall. On the contrary, when we test the system on one particular related L1 (Table 8) we observe the opposite effect: with the exception of ESsubj data, precision and accuracy improve, suggesting that the error detection system using L1-induced information identifies errors more precisely. 
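For reference, the per-class figures reported in the tables of this and the preceding sections (precision Pe, recall Re and F1e on the error class, alongside overall accuracy) follow the usual definitions; a minimal sketch with invented predictions is given below.

```python
def error_class_scores(gold, pred):
    """Accuracy plus precision/recall/F1 on the error (positive) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    acc = sum(1 for g, p in zip(gold, pred) if g == p) / len(gold)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Toy example: 1 marks an annotated lexical-choice error.
gold = [1, 0, 1, 1, 0, 0]
pred = [1, 0, 0, 1, 1, 0]
print(error_class_scores(gold, pred))
```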
Overall, the observed gains in performance indicate that L1 semantic models contribute information to lexical choice error detection in L2 for the speakers of typologically related languages. This in turn suggests that there may be less semantic variation within a language group than across different language groups. 7 Discussion and data analysis The best accuracy achieved in our experiments is 71.19% on ESsubj combinations. However, previous research suggests that error detection in lexical choice is a difficult task. For instance, Kochmar and Briscoe (2014) report that the agreement between human annotators on error detection in adjective–noun combinations is 86.50%. We then qualitatively assessed the performance of our systems by analysing what types of errors L1 Features Acc Pe Re F1e RUdobj baseline 55.04 47.68 63.87 53.81 ftEn 64.73 59.76 46.05 52.01 +pmiL1 65.15 60.63 45.77 52.16 RUsubj baseline 53.30 44.77 61.09 51.29 ftEn 61.84 54.63 35.81 43.22 +pmiL1 62.53 57.24 35.11 43.18 ESdobj baseline 55.25 51.67 76.79 61.21 ftEn 64.06 62.30 56.01 58.98 +pmiL1 65.21 63.44 58.13 60.66 ESsubj baseline 54.34 47.76 68.73 56.23 ftEn 62.71 58.80 43.09 49.69 +pmiL1 62.44 58.46 41.71 48.60 Table 8: System performance (in %) using L1 and L2 lexico-semantic features, L1 →REL L1. the classifiers reliably detect and what types of errors the classifiers miss across all runs over the samples. Some of the most reliably identified errors in both RU and ES datasets include: • verbs offer, propose and suggest which are often confused with each other. Correctly identified errors include *offer plan vs. suggest plan, *propose work vs. offer work and *suggest cost vs. offer cost; • verbs demonstrate and show where demonstrate is often used instead of show as in *chart demonstrates; • verbs say and tell particularly well identified with the ES model. Examples include *say idea instead of tell idea and *tell goodbye instead of say goodbye. These examples represent lexical choice errors when selecting among near-synonyms, and violations of verb subcategorization frames. The error in *find solution discussed throughout the paper is also reliably identified by the classifier across all runs. It is interesting to note that in the pair of verbs do and make, which are often confused with each other by both Russian and Spanish L1 speakers, errors involving make are identified more reliably than errors involving do: for example, *make business is correctly identified as an error, while *do joke is missed by the classifier. Many of the errors missed by the classifier are context-dependent. Some of the most problematic errors involve errors in combinations with verbs like be and become. Such errors do not result from an L1 lexico-semantic transfer and it is not surprising that the classifiers miss them. 981 8 Conclusion We have investigated whether lexico-semantic models from the native language are transferred to the second language, and what effect this transfer has on lexical choice in L2. We focused on two typologically different L1s – Russian and Spanish, and experimentally confirmed the hypothesis that statistical semantic models learned from these L1s significantly improve automatic error detection in L2 data produced by the speakers of the respective L1s. We also investigated whether the semantic models learned from particular L1s are portable to other languages, and in particular to languages that are typologically close to the investigated L1s. 
Our results demonstrate that L1 models improve the coverage of the error detection system on a range of other L1s. Acknowledgments We are grateful to the ACL reviewers for their helpful feedback. Ekaterina Kochmar’s research is supported by Cambridge English Language Assessment via the ALTA Institute. Ekaterina Shutova’s research is supported by the Leverhulme Trust Early Career Fellowship. References Bach E. and Chao W. 2008. Semantic universals and typology. In Chris Collins, Morten Christiansen and Shimon Edelman, eds., Language Universals (Oxford: Oxford University Press). Baldwin T. and Kim S. N. 2010. Multiword Expressions. In Handbook of Natural Language Processing, Second Edition, N. Indurkhya and F. J. Damerau (eds.), pp. 267–292. Ballesteros M., Herrera J., Francisco V., and Gerv´as P. 2010. A Feasibility Study on Low Level Techniques for Improving Parsing Accuracy for Spanish Using Maltparser. In Proceedings of the 6th Hellenic Conference on Artificial Intelligence: Theories, Models and Applications, pp. 39–48. Baroni M., Bernardini S., Ferraresi A., and Zanchetta E. 2009. The WaCky Wide Web: A Collection of Very Large Linguistically Processed Web-Crawled Corpora. Language Resources and Evaluation, 43(3): 209–226. Briscoe E., Carroll J., and Watson R. 2006. The Second Release of the RASP System. In Proceedings of the COLING/ACL-2006 Interactive Presentation Sessions, pp. 59–68. Burnard L. 2007. The British National Corpus, version 3 (BNC XML Edition). Distributed by Oxford University Computing Services on behalf of the BNC Consortium. http://www.natcorp.ox. ac.uk/. Chang Y.C., Chang J.S., Chen H.J., and Liou H.C. 2012. An automatic collocation writing assistant for Taiwanese EFL learners: A case of corpusbased NLP technology. Computer Assisted Language Learning, 21(3), pp. 283–299. Dahlmeier D. and Ng H.T. 2011. Correcting Semantic Collocation Errors with L1-induced Paraphrases. In Proceedings of the EMNLP-2011, pp. 107–117. Futagi Y., Deane P., Chodorow M., and Tetreault J. 2009. A computational approach to detecting collocation errors in the writing of non-native speakers of English. Computer Assisted Language Learning, 21(4), pp. 353–367. Joachims T. 1999. Making Large-Scale SVM Learning Practical. Advances in Kernel Methods – Support Vector Learning. B. Sch¨olkopf and C. Burges and A. Smola (ed.), MIT-Press. Kochmar E. and Briscoe T. 2014. Detecting Learner Errors in the Choice of Content Words Using Compositional Distributional Semantics. In Proceedings of the 25th International Conference on Computational Linguistics: Technical Papers, pp. 1740–1751 Kochmar E. and Briscoe T. 2015. Using Learner Data to Improve Error Correction in AdjectiveNoun Combinations. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 233-242. Leacock C., Chodorow M., Gamon M. and Tetreault J. 2014. Automated Grammatical Error Detection for Language Learners. Morgan and Claypool Publishers. Liu A. L.-E., Wible D., and Tsao N.-L. 2009. Automated suggestions for miscollocations. In Proceedings of the 4th Workshop on Innovative Use of NLP for Building Educational Applications, pp. 47–50. Madnani N., Tetreault J., and Chodorow M. 2012. Exploring Grammatical Error Correction with Not-SoCrummy Machine Translation. In Proceedings of the 7th Workshop on the Innovative Use of NLP for Building Educational Applications, pp. 44-53. Mendonca A., Jaquette D., Graff D., and DiPersio D. 2011. Spanish Gigaword Third Edition. 
Linguistic Data Consortium, Philadelphia. Mikolov T., Sutskever I., Chen K., Corrado G., and Dean J. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS. 982 Mikolov T., Yih W.-T., and Zweig G. 2013. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of NAACL HLT. Miller G. A. 1995. WordNet: A Lexical Database for English. Communications of the ACM, 38(11): 39– 41. Mitchell J. and Lapata M. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pp. 236–244. Mitchell J. and Lapata M. 2010. Composition in distributional models of semantics. Cognitive Science, 34, pp. 1388–1429. Nagata, R. and Nakatani, K. 2010. Evaluating Performance of Grammatical Error Detection to Maximize Learning Effect. In Proceedings of COLING (Posters), pp. 894-900. Ng, H.T., Wu, S. M., Briscoe, T., Hadiwinoto, C., Susanto, R. H., Bryant, C. 2014. The CoNLL-2014 Shared Task on Grammatical Error Correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pp. 1–14. Nicholls D. 2003. The Cambridge Learner Corpus: Error coding and analysis for lexicography and ELT. In Proceedings of the Corpus Linguistics conference, pp. 572–581. Nivre J., Hall J., Nilsson J., Chanev A., Eryigit G., K¨ubler S., Marinov S., and Marsi E. 2007. MaltParser: A language-independent system for datadriven dependency parsing. Natural Language Engineering, 2(13):95–135. Odlin T. 1989. Language transfer: Cross-linguistic influence in language learning. Cambridge University Press. ¨Ostling R. and Knutsson O. 2009. A corpus-based tool for helping writers with Swedish collocations. In Proceedings of the Workshop on Extracting and Using Constructions in NLP, NODALIDA, pp. 28– 33. Park T., Lank E., Poupart P., and Terry M. 2008. Is the sky pure today? AwkChecker: an assistive tool for detecting and correcting collocation errors. In Proceedings of the 21st annual ACM symposium on User interface software and technology, pp. 121– 130. Rozovskaya A. and Roth D. 2010. Generating Confusion Sets for Context-Sensitive Error Correction. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 961970. Rozovskaya A. and Roth D. 2011. Algorithm Selection and Model Adaptation for ESL Correction Tasks. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies – Volume 1, pp. 924–933. Sharoff S. 2006. Creating General-Purpose Corpora Using Automated Search Engine Queries. WaCky! Working papers on the Web as Corpus, Marco Baroni and Silvia Bernardini (ed.). Sharoff S. and Nivre J. 2011. The proper place of men and machines in language technology Processing Russian without any linguistic knowledge. Dialogue 2011, Russian Conference on Computational Linguistics. Shei C.C. and Pain H. 2000. An ESL Writer’s Collocation Aid. Computer Assisted Language Learning, 13(2), pp. 167–182. Vecchi E., Baroni M. and Zamparelli R. 2011. (Linear) maps of the impossible: Capturing semantic anomalies in distributional space. In Proceedings of the DISCO Workshop at ACL-2011, pp. 1–9. Wible H., Kwo C.-H., Tsao N.-L., Liu A., and Lin H.L. 2003. Bootstrapping in a language-learning environment. Journal of Computer Assisted Learning, 19(4), pp. 90–102. 983
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 984–993, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A CALL System for Learning Preposition Usage John Lee Department of Linguistics and Translation City University of Hong Kong [email protected] Donald Sturgeon Fairbank Center for Chinese Studies Harvard University [email protected] Mengqi Luo Department of Linguistics and Translation City University of Hong Kong [email protected] Abstract Fill-in-the-blank items are commonly featured in computer-assisted language learning (CALL) systems. An item displays a sentence with a blank, and often proposes a number of choices for filling it. These choices should include one correct answer and several plausible distractors. We describe a system that, given an English corpus, automatically generates distractors to produce items for preposition usage. We report a comprehensive evaluation on this system, involving both experts and learners. First, we analyze the difficulty levels of machine-generated carrier sentences and distractors, comparing several methods that exploit learner error and learner revision patterns. We show that the quality of machine-generated items approaches that of human-crafted ones. Further, we investigate the extent to which mismatched L1 between the user and the learner corpora affects the quality of distractors. Finally, we measure the system’s impact on the user’s language proficiency in both the short and the long term. 1 Introduction Fill-in-the-blank items, also known as gap-fill or cloze items, are a common form of exercise in computer-assisted language learning (CALL) applications. Table 1 shows an example item designed for teaching English preposition usage. It contains a sentence, “The objective is to kick the ball into the opponent’s goal”, with the preposition “into” blanked out; this sentence serves as the stem (or carrier sentence). It is followed by four choices for the blank, one of which is the key (i.e., the correct answer), and the other three are distractors. These choices enable the CALL application to provide immediate and objective feedback to the learner. A high-quality item must meet multiple requirements. It should have a stem that is fluent and matches the reading ability of the learner; a blank that is appropriate for the intended pedagogical goal; exactly one correct answer among the choices offered; and finally, a number of distractors that seem plausible to the learner, and yet would each yield an incorrect sentence. Relying on language teachers to author these items is time consuming. Automatic generation of these items would not only expedite item authoring, but also potentially provide personalized items to suit the needs of individual learners. This paper addresses two research topics: • How do machine-generated items compare with human-crafted items in terms of their quality? • Do these items help improve the users’ language proficiency? For the first question, we focus on automatic generation of preposition distractors, comparing three different methods for distractor generation. One is based on word co-occurrence in standard The objective is to kick the ball the opponent’s goal. (A) in (B) into (C) to (D) with Table 1: An automatically generated fill-in-theblank item, where “into” is the key, and the other three choices are distractors. 
984 corpora; a second leverages error annotations in learner corpora; the third, a novel method, exploits learners’ revision behavior. Further, we investigate the effect of tailoring distractors to the user’s native language (L1). For the second question, we measure users’ performance in the short and in the long term, through an experiment involving ten subjects, in multiple sessions tailored to their proficiency and areas of weakness. Although a previous study has shown that learner error statistics can produce competitive items for prepositions on a narrow domain (Lee and Seneff, 2007), a number of research questions still await further investigation. Through both expert and learner evaluation, we will compare the quality of carrier sentences and the plausibility of automatically generated distractors against human-crafted ones. Further, we will measure the effect of mismatched L1 between the user and the learner corpora, and the short- and long-term impact on the user’s preposition proficiency. To the best of our knowledge, this paper offers the most detailed evaluation to-date covering all these aspects. The rest of the paper is organized as follows. Section 2 reviews previous work. Section 3 outlines the algorithms for generating the fill-in-theblank items. Section 4 gives details about the experimental setup and evaluation procedures. Section 5 analyzes the results. Section 6 concludes the paper. 2 Previous Work 2.1 Distractor generation Most research effort on automatic generation of fill-in-the-blank items has focused on vocabulary learning. In these items, the key is typically from an open-class part-of-speech (POS), e.g., nouns, verbs, or adjectives. To ensure that the distractor results in an incorrect sentence, the distractor must rarely, or never, collocate with other words in the carrier sentence (Liu et al., 2005). To ensure the plausibility of the distractor, most approaches require it to be semantically close to the key, as determined by a thesaurus (Sumita et al., 2005; Smith et al., 2010), an ontology (Karamanis et al., 2006), rules handcrafted by experts (Chen et al., 2006), or contextsensitive inference rules (Zesch and Melamud, 2014); or to have similar word frequency (Shei, 2001; Brown et al., 2005). Sakaguchi et al. (2013) applied machine learning methods to select verb distractors, and showed that they resulted in items that can better predict the user’s English proficiency level. Less attention has been paid to items for closedclass POS, such as articles, conjunctions and prepositions, which learners also often find difficult (Dahlmeier et al., 2013). For these POS, the standard algorithms based on semantic relatedness for open-class POS are not applicable. Lee and Seneff (2007) reported the only previous study on using learner corpora to generate items for a closed-class POS. They harvested the most frequent preposition errors in a corpus of Japanese learners of English (Izumi et al., 2003), but performed an empirical evaluation with native Chinese speakers on a narrow domain. We expand on this study in several dimensions. First, carrier sentences, selected from the general domain rather than a specific one, will be analyzed in terms of their difficulty level. Second, distractor quality will be evaluated not only by learners but also by experts, who give scores based on their plausibility; in contrast to most previous studies, their quality will be compared with the human gold standard. Thirdly, the effect of mismatched L1 will also be measured. 
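The items discussed so far all share the structure illustrated in Table 1: a carrier sentence with a blank, exactly one key, and several distractors. One possible, purely illustrative, in-memory representation (not the system's actual data model) is sketched below.

```python
from dataclasses import dataclass
from typing import List
import random

@dataclass
class ClozeItem:
    stem: str              # carrier sentence with a blank marker
    key: str               # the single correct preposition
    distractors: List[str]

    def choices(self, seed=0):
        """Key and distractors in a shuffled presentation order."""
        opts = [self.key] + self.distractors
        random.Random(seed).shuffle(opts)
        return opts

item = ClozeItem(
    stem="The objective is to kick the ball ___ the opponent's goal.",
    key="into",
    distractors=["in", "to", "with"],
)
print(item.choices())
```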
2.2 Learner error correction There has been much recent research on automatic correction of grammatical errors. Correction of preposition usage errors, in particular, has received much attention. Our task can be viewed as the inverse of error correction — ensuring that the distractor yields an incorrect sentence — with the additional requirement on the plausibility of the distractor. Most approaches in automatic grammar correction can be classified as one of three types, according to the kind of statistics on which the system is trained. Some systems are trained on examples of correct usage (Tetreault and Chodorow, 2008; Felice and Pulman, 2009). Others are trained on examples of pairs of correct and incorrect usage, either retrieved from error-annotated learner corpora (Han et al., 2010; Dahlmeier et al., 2013) or simulated (Lee and Seneff, 2008; Foster and Andersen, 2009). More recently, a system has been trained on revision statistics from Wikipedia (Cahill et al., 2013). We build on all three paradigms, using standard English cor985 ... kick the ball into the opponent’s goal VP head prep obj prep pobj Figure 1: Parse tree for the carrier sentence in Table 1. Distractors are generated on the basis of the prepositional object (“obj”) and the NP/VP head to which the prepositional phrase is attached (Section 3). pora (Section 3.1), error-annotated learner corpora (Section 3.2) and learner revision corpora (Section 3.3) as resources to predict the most plausible distractors. 3 Item generation The system assumes as input a set of English sentences, which are to serve as candidates for carrier sentences. In each candidate sentence, the system scans for prepositions, and extracts two features from the linguistic context of each preposition: • The prepositional object. In Figure 1, for example, the word “goal” is the prepositional object of the key, “into”. • The head of the noun phrase or verb phrase (NP/VP head) to which the prepositional phrase (PP) is attached. In Figure 1, the PP “into the opponent’s goal” is attached to the VP head “kick”. The system passes these two features to the following methods to generate distractors.1 If all three methods are able to return a distractor, the preposition qualifies to serve as the key. If more than one key is found, the system randomly chooses one of them. In the rest of this paper, we will sometimes abbreviate these three methods as the “Co-occur” (Section 3.1), “Error” (Section 3.2), and “Revision” (Section 3.3) methods, respectively. 3.1 Co-occurrence method Proposed by Lee and Seneff (2007), this method requires co-occurrence statistics from a large corpus of well-formed English sentences. 1We do not consider errors where a preposition should be inserted or deleted. Co-occurrence method (“Co-occur”) ... kicked the chair with ... ... kicked the can with ... ... with the goal of ... Learner error method (“Error”) ... kicked it <error>in</error> the goal. ... kick the ball <error>in</error> the other team’s goal. Learner revision method (“Revision”) ... kick the ball to into his own goal. ... kick the ball to towards his own goal. Table 2: The Co-occurrence Method (Section 3.1) generates “with” as the distractor for the carrier sentence in Figure 1; the Learner Error Method (Section 3.2) generates “in”; the Learner Revision Method (Section 3.3) generates “to”. This method first retrieves all prepositions that co-occur with both the prepositional object and the NP/VP head in the carrier sentence. 
These prepositions are removed from consideration as distractors, since they would likely yield a correct sentence. The remaining candidates are those that cooccur with either the prepositional object or the NP/VP head, but not both. The more frequently the candidate co-occurs with either of these words, the more plausible it is expected to appear to a learner. Thus, the candidate with the highest cooccurrence frequency is chosen as the distractor. As shown in Table 2, this method generates the distractor “with” for the carrier sentence in Figure 1, since many instances of “kick ... with” and “with ... goal” are attested. 3.2 Learner error method This method requires examples of English sentences from an error-annotated learner corpus. The corpus must mark wrong preposition usage, but does not need to provide corrections for the errors. This method first retrieves all PPs that have the given prepositional object and are attached to the given NP/VP head. It then computes the frequency of prepositions that head these PPs and are marked as wrong. The one that is most frequently marked as wrong is chosen as the distractor. As shown in Table 2, this method generates the distractor “in” for the carrier sentence in Figure 1, since it is often marked as an error. 986 3.3 Learner revision method It is expensive and time consuming to annotate learner errors. As an alternative, we exploit the revision behavior of learners in their English writing. This method requires draft versions of texts written by learners. In order to compute statistics on how often a preposition in an earlier draft (“draft n”) is replaced with another one in the later draft (“draft n + 1”), the sentences in successive drafts must be sentence- and word-aligned. This method scans for PPs that have the given prepositional object and are attached to the given NP/VP head. For all learner sentences in draft n that contain these PPs, it consults the sentences in draft n+1 to which they are aligned; it retains only those sentences whose prepositional object and the NP/VP head remain unchanged, but whose preposition has been replaced by another one. Among these sentences, the method selects the preposition that is most frequently edited between two drafts. Our assumption is that frequent editing implies a degree of uncertainty on the part of the learner as to which of these prepositions is in fact correct, thus suggesting that they may be effective distractors. As shown in Table 2, this method generates the distractor “to” for the carrier sentence in Figure 1, since it is most often edited in the given linguistic context. This study is the first to exploit a corpus of learner revision history for item generation.2 4 Experimental setup In this section, we first describe our datasets (Section 4.1) and the procedure for item generation (Section 4.2). We then give details on the expert evaluation (Section 4.3) and the learner evaluation (Section 4.4). 4.1 Data Carrier sentences. We used sentences in the English portion of the Wikicorpus (Reese et al., 2010) as carrier sentences. To avoid selecting stems with overly difficult vocabulary, we ranked the sentences in terms of their most difficult word. We measured the difficulty level of a word firstly with the graded English vocabulary lists compiled by the Hong Kong Education Bureau (EDB, 2012); and secondly, for words not occurring in 2A similar approach, using revision statistics in Wikipedia, has been used for the purpose of correcting preposition errors (Cahill et al., 2013). 
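The Co-occurrence Method just described reduces to a few lines once the counts are available. The sketch below uses invented counts; the real statistics are harvested from a large parsed corpus of well-formed English (Section 3.1).

```python
from collections import Counter

# Invented co-occurrence counts: (head or object word, preposition) -> frequency.
head_prep = Counter({("kick", "with"): 120, ("kick", "in"): 70,
                     ("kick", "to"): 30})
obj_prep = Counter({("goal", "in"): 60, ("goal", "of"): 90})
PREPOSITIONS = ["in", "into", "of", "to", "with"]

def cooccurrence_distractor(head, obj, key):
    candidates = {}
    for prep in PREPOSITIONS:
        if prep == key:
            continue
        with_head = head_prep[(head, prep)]
        with_obj = obj_prep[(obj, prep)]
        if with_head and with_obj:
            # Attested with both words: likely yields a correct sentence,
            # so it is removed from consideration as a distractor.
            continue
        if with_head or with_obj:
            candidates[prep] = with_head + with_obj
    # The most frequent remaining candidate is taken as the most plausible.
    return max(candidates, key=candidates.get) if candidates else None

# With these toy counts, 'in' is excluded (attested with both "kick" and
# "goal") and 'with' wins as the most frequent remaining candidate.
print(cooccurrence_distractor("kick", "goal", key="into"))
```

The Learner Error Method of Section 3.2 has the same shape: the two count tables are replaced by counts of prepositions marked as erroneous in the same (head, object) context, and the most frequent candidate is again selected.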
any of these lists, with frequency counts derived from the Google Web Trillion Word Corpus.3 In order to retrieve the prepositional object and the NP/VP head (cf. Section 3), we parsed the Wikicorpus, as well as the corpora mentioned below, with the Stanford parser (Manning et al., 2014). Co-occurrence method (“Co-occur”). The statistics for the Co-occurrence method were also based on the English portion of Wikicorpus. Learner Revision method (“Revision”). We used an 8-million-word corpus of essay drafts written by Chinese learners of English (Lee et al., 2015). This corpus contains over 4,000 essays, with an average of 2.7 drafts per essay. The sentences and words between successive drafts have been automatically aligned. Learner Error method (“Error”). In addition to the corpus of essay drafts mentioned above, we used two other error-annotated learner corpora. The NUS Corpus of Learner English (NUCLE) contains one million words of academic writing by students at the National University of Singapore (Dahlmeier et al., 2013). The EF-Cambridge Open Language Database (EFCAMDAT) contains over 70 million words from 1.2 million assignments written by learners from a variety of linguistic background (Geertzen et al., 2013). A subset of the database has been error-annotated. We made use of the writings in this subset that were produced by students from China and Russia. Human items (“Textbook”). To provide a comparison with human-authored items, we used the practise tests for preposition usage offered in an English exercise book designed for intermediate and advanced learners (Watcyn-Jones and Allsop, 2000). From the 50 tests in a variety of formats, we harvested 56 multiple-choice items, all of which had one key and three distractors. 4.2 Item generation procedure We gathered three sets of 400 carrier sentences, for use in three evaluation sessions (see Section 4.4). Each sentence in Set 1 has one counterpart in Set 2 and one counterpart in Set 3 that have the same key, NP/VP head and prepositional object. We will refer to the items created from these counterpart carrier sentences as “similar” items. We will use these “similar” items to measure the learning impact on the subjects. Each item has one key and distractors generated 3http://norvig.com/ngrams/ 987 by each of the three methods. For about half of the items, the three methods complemented one another to offer three distinct distractors. In the other half, two of the methods yielded the same distractor, resulting in only two distractors for those items. In Set 1, for control purposes, 56 of the items were replaced with the human items. 4.3 Expert evaluation procedure Two professional English teachers (henceforth, the “experts”) examined each of the 400 items in Set 1. They annotated each item, and each choice in the item, as follows. For each item, the experts labeled its difficulty level in terms of the preposition usage being tested in the carrier sentence. They did not know whether the item was human-authored or machine-generated. Based on their experience in teaching English to native speakers of Chinese, they labeled each item as suitable for those in “Grades 1-3”, “Grades 4-6”, “Grades 7-9”, “Grades 10-12”, or “>Grade 12”. We mapped these five categories to integers — 2, 5, 8, 11 and 13, respectively — for the purpose of calculating difficulty scores. For each choice in the item, the experts judged whether it is correct or incorrect. They did not know whether each choice was the key or a distractor. 
They may judge one, multiple, or none of the choices as correct. For an incorrect choice, they further assessed its plausibility as a distractor, again from their experience in teaching English to native speakers of Chinese. They may label it as “Plausible”, “Somewhat plausible”, or “Obviously wrong”. 4.4 Learner evaluation procedure Ten university students (henceforth, the “learners”) took part in the evaluation. They were all native Chinese speakers who did not major in English. The evaluation consisted of three one-hour sessions held on different days. At each session, the learner attempted 80 items on a browser-based application (Figure 2). The items were distributed in these sessions as follows. Session 1. The 400 items in Set 1 were divided into 5 groups of 80 items, with 11 to 12 human items in each group. The items in each group had comparable difficulty levels as determined by the experts, with average scores ranging from 7.9 to 8.1. Each group was independently attempted by two learners. The system recorded the items to Figure 2: Interface for the learner evaluation. On the left, the learner selects a choice by tapping on it; on the right, the learner receives feedback. which the learner gave wrong answers; these will be referred to as the “wrong items”. Among the items to which the learner gave correct answers, the system randomly set aside 10 items; these will be referred to as “control items”. Session 2. To measure the short-term impact, Session 2 was held on the day following Session 1. Each learner attempted 80 items, drawn from Set 2. These items were personalized according to the “wrong items” of the individual learner. For example, if a learner had 15 “wrong items” from Session 1, he or she then received 15 similar items4 from Set 2. In addition, he or she also received ten items that were similar to the “control items” from Session 1. The remaining items were drawn randomly from Set 2. As in Session 1, the system noted the “wrong items” and set aside ten “control items”. Session 3. To test the long-term effect of these exercises, Session 3 was held two weeks after Session 2. Each learner attempted another 80 items, drawn from Set 3. These 80 items were chosen in the same manner as in Session 2. 5 Results We first report inter-annotator agreement between the two experts on the difficulty levels of the carrier sentences and the distractors (Section 5.1). We then compare the difficulty levels of the humanand machine-generated items (Section 5.2). Next, we analyze the reliability and difficulty5 of the 4See definition of “similar” in Section 4.2. 5Another metric, “validity”, measures the ability of the distractor to discriminate between students of different proficiency levels. This metric is relevant for items intended for 988 Figure 3: The difficulty level of the items in Set 1, as annotated by the experts. automatically generated distractors (Sections 5.3 and 5.4), and the role of the native language (Section 5.5). Finally, we measure the impact on the learners’ preposition proficiency (Section 5.6). 5.1 Inter-annotator agreement For estimating the difficulty level of the preposition usage in the carrier sentences, the experts reached “substantial” agreement with kappa at 0.765 (Landis and Koch, 1977). In deciding whether a choice is correct or incorrect, the experts reached “almost perfect” agreement with kappa at 0.977. On the plausibility of the distractors, they reached “moderate” agreement with kappa at 0.537. 
The main confusion was between the categories “Obviously wrong” and “Somewhat plausible”. On the whole, expert judgment tended to correlate with actual behavior of the learners. For distractors considered “Plausible” by both experts, 63.6% were selected by the learners. In contrast, for those considered “Obviously wrong” by both experts, only 11.8% attracted any learner. 5.2 Carrier sentence difficulty Figure 3 shows the distribution of difficulty level scores for the preposition usage in carrier sentences. Most items were rated as “Grades 7-9”, with “Grades 4-6” being the second largest group. A common concern over machine-generated items is whether the machine can create or select the kind of carrier sentences that illustrate challenging or advanced preposition usage, compared to those crafted by humans. In our system, the preposition errors and revisions in the learner corpora — as captured by the NP/VP head and the assessment purposes (Brown et al., 2005; Sakaguchi et al., 2013) rather than self-learning. prepositional object — effectively served as the filter for selecting carrier sentences. Some of these errors and revisions may well be careless or trivial mistakes, and may not necessarily lead to the selection of appropriate carrier sentences. To answer this question, we compared the difficulty levels of preposition usage in the machinegenerated and human-crafted items. The average difficulty score for the human items was 8.7, meaning they were suitable for those in Grade 8. The average for the machine-generated items were lower, at 7.2. This result suggests that our system can select carrier sentences that illustrate challenging preposition usage, at a level that is only about 1.5 grade points below those designed by humans. 5.3 Distractor reliability A second common concern over machinegenerated items is whether their distractors might yield correct sentences. When taken out of context, a carrier sentence often admits multiple possible answers (Tetreault and Chodorow, 2008; Lee et al., 2009). In this section, we compare the performance of the automatic distractor generation methods against humans. A distractor is called “reliable” if it yields an incorrect sentence. The Learner Revision method generated the most reliable distractors6; on average, 97.4% of the distractors were judged incorrect by both experts (Table 3). The Co-occurrence method ranked second at 96.1%, slightly better than those from the Learner Error method. Many distractors from the Learner Error method indeed led to incorrect sentences in their original contexts, but became acceptable when their carrier sentences were read in isolation. Items with unreliable distractors were excluded from the learner evaluation. Surprisingly, both the Learner Revision and Cooccurrence methods outperformed the humans. Distractors in some of the human items did indeed yield sentences that were technically correct, and were therefore deemed “unreliable” by the experts. In many cases, however, these distractors were accompanied with keys that provided more natural choices. These items, therefore, remained valid. 6The difference with the Co-occurrence method is not statistically significant, in part due to the small sample size. 989 Method Reliable distractor Co-occur 96.1% Error 95.6% Revision 97.4% Textbook 95.8% Table 3: Distractors judged reliable by both experts. 
5.4 Distractor difficulty In the context of language learning, an item can be considered more useful if one of its distractors elicits a wrong choice from the learner, who would then receive corrective feedback. In this section, we compare the “difficulty” of the distractor generated by the various methods, in terms of their ability to attract the learners. Expert evaluation. The two methods based on learner statistics produced the highest-quality distractors (Table 4). The Learner Error method had the highest rate of plausible distractors (51.2%) and the lowest rate of obviously wrong ones (22.0%). In terms of the number of distractors considered “Plausible”, this method significantly outperformed the Learner Revision method.7 According to Table 4, all three automatic methods outperformed the humans in terms of the number of distractors rated “Plausible”. This comparison, however, is not entirely fair, since the human items always supplied three distractors, whereas about half of the machine-generated items supplied only two, when two of the methods returned the same distractor. An alternate metric is to compute the average number of distractors rated “Plausible” per item. On average, the human items had 0.91 plausible distractors; in comparison, the machine-generated items had 1.27. This result suggests that automatic generation of preposition distractors can perform at the human level. Learner evaluation. The most direct way to evaluate the difficulty of a distractor is to measure how often a learner chose it. The contrast is less clear cut in this evaluation. Overall, the learners correctly answered 76.2% of the machinegenerated items, and 75.5% of the human items, suggesting that the human distractors were more challenging. One must also take into account, however, the fact that the carrier sentences are 7p < 0.05 by McNemar’s test, for both expert annotators. Method Plausible SomeObviouswhat ly plausible wrong Co-occur 34.6% 31.5% 33.9% Error 51.2% 26.8% 22.0% Revision 45.4% 28.5% 26.1% Textbook 31.4% 34.2% 34.5% Table 4: Plausibility judgment of distractors by experts. more difficult in the human items than in the machine-generated ones. Broadly speaking, the machine-generated distractors were almost as successful as those authored by humans. Consistent with the experts’ opinion (Table 4), the Learner Error method was most successful among the three automatic methods (Table 5). The learner selection rate of its distractors was 13.5%, which was significantly higher8 than its closest competitor, the Learner Revision method, at 9.5%. The Co-occurrence method ranked last, at 9.2%. It is unfortunately difficult to directly compare these rates with that of the human distractors, which they were offered in different carrier sentences. 5.5 Impact of L1 We now turn our attention to the relation between the native language (L1) of the user, and that of the learner corpora used for training the system. Specifically, we wish to measure the gain, if any, in matching the L1 of the user with the L1 of the learner corpora. To this end, for the Learner Error method, we generated distractors from the EFCambridge corpus with two sets of statistics: one harvested from the portion of the corpus with writings by Chinese students, the others from the portion by Russian students. Expert evaluation. Table 6 contrasts the experts’ plausibility judgment on distractors generated from these two sets. Chinese distractors were 8p < 0.05 by McNemar’s test. 
Method Learner selection rate Co-occur 9.2% Error 13.5% Revision 9.5% Table 5: Percentage of distractors selected by learners. 990 Method Plausible SomeObviouswhat ly plausible wrong Chinese 57.7% 24.0% 18.3% Russian 55.3% 22.0% 22.7% Table 6: Plausibility judgment of distractors generated from the Chinese and Russian portions of the EF-Cambridge corpus, by experts. slightly more likely to be rated “plausible” than the Russian ones, and less likely to be rated “obviously wrong”.9 The gap between the two sets of distractors was smaller than may be expected. Learner evaluation. The difference was somewhat more pronounced in terms of the learners’ behavior. The learners selected Chinese distractors, which matched their L1, 29.9% of the time over the three sessions. In contrast, they fell for the Russian distractors, which did not match their L1, only 25.1% of the time. This result confirms the intuition that matching L1 improves the plausibility of the distractors, but the difference was nonetheless relatively small. This result suggets that it might be worth paying the price for mismatched L1s, in return for a much larger pool of learner statistics. 5.6 Impact on learners In this section, we consider the impact of these exercises on the learners. The performance of the learners was rather stable across all sessions; their average scores in the three sessions were 73.0%, 73.6% and 69.9%, respectively. It is difficult, however, to judge from these scores whether the learners benefited from the exercises, since the composition of the items differed for each session. Instead, we measured how often the learners retain the system feedback. More specifically, if the learner chose a distractor and received feedback (cf. Figure 2), how likely would he or she succeed in choosing the key in a “similar”10 item in a subsequent session. We compared the learners’ responses between Sessions 1 and 2 to measure the short-term impact, and between Sessions 2 and 3 to measure the longterm impact. In Session 2, when the learners at9Data sparseness prevented us from generating both Chinese and Russian distractors for the same carrier sentences for evaluation. These statistics are therefore not controlled with regard to the difficulty level of the sentences. 10See definition of “similar” in Section 4.2. Difficulty level Retention rate Below 6 74.0% 6-8 71.3% 9-11 60.0% 12 or above 25% Table 7: Retention rate for items at different levels of difficulty. tempted items that were “similar” to their “wrong items” from Session 1, they succeeded in choosing the key in 72.4% of the cases.11 We refer to this figure as the “retention rate”, in this case over the one-day period between the two sessions. The retention rate deteriorated over a longer term. In Session 3, when the learners attempted items that were “similar” to their “wrong items” from Session 2, which took place two weeks before, they succeeded only in 61.5% of the cases.12 Further, we analyzed whether the difficulty level of the items affected their retention rate. Statistics in Table 7 show that the rate varied widely according to the difficulty level of the “wrong items”. Difficult items, at Grade 12 or beyond, proved hardest to learn, with a retention rate of only 25%. At the other end of the spectrum, those below Grade 6 were retained 74% of the time. This points to the need for the system to reinforce difficult items more frequently. 
6 Conclusions We have presented a computer-assisted language learning (CALL) system that automatically creates fill-in-the-blank items for prepositions. We found that the preposition usage tested in automatically selected carrier sentences were only slightly less challenging than those crafted by humans. We compared the performance of three methods for distractor generation, including a novel method that exploits learner revision statistics. The method based on learner error statistics yielded the most plausible distractors, followed by the one based on learner revision statistics. The items produced jointly by these automatic methods, in both expert and learner evaluations, rivalled the quality of human-authored items. Further, we evaluated the extent to which mismatched 11As a control, the retention rate for correctly answered items in Session 1 was 80% in Session 2. 12As a control, the retention rate for correctly answered items in Session 2 was 69.0% in Session 3. 991 native language (L1) affects distractor plausibility. Finally, in a study on the short- and long-term impact on the learners, we showed that difficult items had lower retention rate. In future work, we plan to conduct larger-scale evaluations to further validate these results, and to apply these methods on other common learner errors. Acknowledgments We thank NetDragon Websoft Holding Limited for their assistance with system evaluation, and the reviewers for their very helpful comments. This work was partially supported by an Applied Research Grant (Project no. 9667115) from City University of Hong Kong. References Jonathan C. Brown, Gwen A. Frishkoff, and Maxine Eskenazi. 2005. Automatic Question Generation for Vocabulary Assessment. In Proc. HLT-EMNLP. Aoife Cahill, Nitin Madnani, Joel Tetreault, and Diane Napolitano. 2013. Robust Systems for Preposition Error Correction using Wikipedia Revisions. In Proc. NAACL-HLT. Chia-Yin Chen, Hsien-Chin Liou, and Jason S. Chang. 2006. FAST: An Automatic Generation System for Grammar Tests. In Proc. COLING/ACL Interactive Presentation Sessions. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner English. In Proc. 8th Workshop on Innovative Use of NLP for Building Educational Applications. EDB. 2012. Enhancing English Vocabulary Learning and Teaching at Secondary Level. http://www.edb.gov.hk/vocab learning sec. Rachele De Felice and Stephen Pulman. 2009. Automatic Detection of Preposition Errors in Learner Writing. CALICO Journal, 26(3):512–528. Jennifer Foster and Øistein E. Andersen. 2009. GenERRate: Generating Errors for Use in Grammatical Error Detection. In Proc. 4th Workshop on Innovative Use of NLP for Building Educational Applications. Jeroen Geertzen, Theodora Alexopoulou, and Anna Korhonen. 2013. Automatic Linguistic Annotation of Large Scale L2 Databases: The EF-Cambridge Open Language Database (EFCAMDAT). In Proc. 31st Second Language Research Forum (SLRF). Na-Rae Han, Joel Tetreault, Soo-Hwa Lee, and JinYoung Ha. 2010. Using Error-annotated ESL Data to Develop an ESL Error Correction System. In Proc. LREC. Emi Izumi, Kiyotaka Uchimoto, Toyomi Saiga, Thepchai Supnithi, and Hitoshi Isahara. 2003. Automatic Error Detection in the Japanese Learners’ English Spoken Data. In Proc. ACL. Nikiforos Karamanis, Le An Ha, and Ruslan Mitkov. 2006. Generating Multiple-Choice Test Items from Medical Text: A Pilot Study. In Proc. 4th International Natural Language Generation Conference. J. 
Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics, 33:159–174. John Lee and Stephanie Seneff. 2007. Automatic generation of cloze items for prepositions. In Proc. Interspeech. John Lee and Stephanie Seneff. 2008. Correcting Misuse of Verb Forms. In Proc. ACL. John Lee, Joel Tetreault, and Martin Chodorow. 2009. Human Evaluation of Article and Noun Number Usage: Influences of Context and Construction Variability. In Proc. Linguistic Annotation Workshop. John Lee, Chak Yan Yeung, Amir Zeldes, Marc Reznicek, Anke L¨udeling, and Jonathan Webster. 2015. CityU Corpus of Essay Drafts of English Language Learners: a Corpus of Textual Revision in Second Language Writing. Language Resources and Evaluation, 49(3):659–683. Chao-Lin Liu, Chun-Hung Wang, Zhao-Ming Gao, and Shang-Ming Huang. 2005. Applications of Lexical Information for Algorithmically Composing Multiple-Choice Cloze Items. In Proc. 2nd Workshop on Building Educational Applications Using NLP, pages 1–8. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proc. ACL System Demonstrations, pages 55–60. Samuel Reese, Gemma Boleda, Montse Cuadros, Llu´ıs Padr´o, and German Rigau. 2010. Wikicorpus: A Word-Sense Disambiguated Multilingual Wikipedia Corpus. In Proc. LREC. Keisuke Sakaguchi, Yuki Arase, and Mamoru Komachi. 2013. Discriminative Approach to Fill-inthe-Blank Quiz Generation for Language Learners. In Proc. ACL. Chi-Chiang Shei. 2001. FollowYou!: An Automatic Language Lesson Generation System. Computer Assisted Language Learning, 14(2):129–144. Simon Smith, P. V. S. Avinesh, and Adam Kilgarriff. 2010. Gap-fill Tests for Language Learners: Corpus-Driven Item Generation. In Proc. 8th International Conference on Natural Language Processing (ICON). 992 Eiichiro Sumita, Fumiaki Sugaya, and Seiichi Yamamoto. 2005. Measuring Non-native Speakers Proficiency of English by Using a Test with Automatically-Generated Fill-in-the-Blank Questions. In Proc. 2nd Workshop on Building Educational Applications using NLP. Joel Tetreault and Martin Chodorow. 2008. The Ups and Downs of Preposition Error Detection in ESL Writing. In Proc. COLING. Peter Watcyn-Jones and Jake Allsop. 2000. Test Your Prepositions. Penguin Books Ltd. Torsten Zesch and Oren Melamud. 2014. Automatic Generation of Challenging Distractors Using Context-Sensitive Inference Rules. In Proc. Workshop on Innovative Use of NLP for Building Educational Applications (BEA). 993
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 994–1003, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Persona-Based Neural Conversation Model Jiwei Li1* Michel Galley2 Chris Brockett2 Georgios P. Spithourakis3* Jianfeng Gao2 Bill Dolan2 1Stanford University, Dept. of Computer Science [email protected] 2Microsoft Research, Redmond {mgalley,chrisbkt,jfgao,billdol}@microsoft.com 3University College London, Dept. of Computer Science [email protected] Abstract We present persona-based models for handling the issue of speaker consistency in neural response generation. A speaker model encodes personas in distributed embeddings that capture individual characteristics such as background information and speaking style. A dyadic speakeraddressee model captures properties of interactions between two interlocutors. Our models yield qualitative performance improvements in both perplexity and BLEU scores over baseline sequence-to-sequence models, with similar gains in speaker consistency as measured by human judges. 1 Introduction As conversational agents gain traction as user interfaces, there has been growing research interest in training naturalistic conversation systems from large volumes of human-to-human interactions (Ritter et al., 2011; Sordoni et al., 2015; Vinyals and Le, 2015; Li et al., 2016). One major issue for these data-driven systems is their propensity to select the response with greatest likelihood—in effect a consensus response of the humans represented in the training data. Outputs are frequently vague or non-committal (Li et al., 2016), and when not, they can be wildly inconsistent, as illustrated in Table 1. In this paper, we address the challenge of consistency and how to endow data-driven systems with the coherent “persona” needed to model humanlike behavior, whether as personal assistants, per* The entirety of this work was conducted at Microsoft. message Where do you live now? response I live in Los Angeles. message In which city do you live now? response I live in Madrid. message In which country do you live now? response England, you? message Where were you born? response I was born in Canada. message Where are you from? response England, you? message Where did you grow up? response I grew up in Texas. message How old are you? response 16 and you? message What’s your age? response 18. message What is your major? response I’m majoring in psychology message What did you study in college? response English lit. Table 1: Inconsistent responses generated by a 4-layer SEQ2SEQ model trained on 25 million Twitter conversation snippets. sonalized avatar-like agents, or game characters.1 For present purposes, we will define PERSONA as the character that an artificial agent, as actor, plays or performs during conversational interactions. A persona can be viewed as a composite of elements of identity (background facts or user profile), language behavior, and interaction style. A persona is also adaptive, since an agent may need to present different facets to different human interlocutors depending on the interaction. Fortunately, neural models of conversation generation (Sordoni et al., 2015; Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016) provide a straightforward mechanism for incorporating personas as embeddings. We therefore explore two per1(Vinyals and Le, 2015) suggest that the lack of a coherent personality makes it impossible for current systems to pass the Turing test. 
994 sona models, a single-speaker SPEAKER MODEL and a dyadic SPEAKER-ADDRESSEE MODEL, within a sequence-to-sequence (SEQ2SEQ) framework (Sutskever et al., 2014). The Speaker Model integrates a speaker-level vector representation into the target part of the SEQ2SEQ model. Analogously, the Speaker-Addressee model encodes the interaction patterns of two interlocutors by constructing an interaction representation from their individual embeddings and incorporating it into the SEQ2SEQ model. These persona vectors are trained on human-human conversation data and used at test time to generate personalized responses. Our experiments on an open-domain corpus of Twitter conversations and dialog datasets comprising TV series scripts show that leveraging persona vectors can improve relative performance up to 20% in BLEU score and 12% in perplexity, with a commensurate gain in consistency as judged by human annotators. 2 Related Work This work follows the line of investigation initiated by Ritter et al. (2011) who treat generation of conversational dialog as a statistical machine translation (SMT) problem. Ritter et al. (2011) represents a break with previous and contemporaneous dialog work that relies extensively on hand-coded rules, typically either building statistical models on top of heuristic rules or templates (Levin et al., 2000; Young et al., 2010; Walker et al., 2003; Pieraccini et al., 2009; Wang et al., 2011) or learning generation rules from a minimal set of authored rules or labels (Oh and Rudnicky, 2000; Ratnaparkhi, 2002; Banchs and Li, 2012; Ameixa et al., 2014; Nio et al., 2014; Chen et al., 2013). More recently (Wen et al., 2015) have used a Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to learn from unaligned data in order to reduce the heuristic space of sentence planning and surface realization. The SMT model proposed by Ritter et al., on the other hand, is end-to-end, purely data-driven, and contains no explicit model of dialog structure; the model learns to converse from human-to-human conversational corpora. Progress in SMT stemming from the use of neural language models (Sutskever et al., 2014; Gao et al., 2014; Bahdanau et al., 2015; Luong et al., 2015) has inspired efforts to extend these neural techniques to SMT-based conversational response generation. Sordoni et al. (2015) augments Ritter et al. (2011) by rescoring outputs using a SEQ2SEQ model conditioned on conversation history. Other researchers have recently used SEQ2SEQ to directly generate responses in an end-to-end fashion without relying on SMT phrase tables (Serban et al., 2015; Shang et al., 2015; Vinyals and Le, 2015). Serban et al. (2015) propose a hierarchical neural model aimed at capturing dependencies over an extended conversation history. Recent work by Li et al. (2016) measures mutual information between message and response in order to reduce the proportion of generic responses typical of SEQ2SEQ systems. Yao et al. (2015) employ an intention network to maintain the relevance of responses. Modeling of users and speakers has been extensively studied within the standard dialog modeling framework (e.g., (Wahlster and Kobsa, 1989; Kobsa, 1990; Schatztnann et al., 2005; Lin and Walker, 2011)). Since generating meaningful responses in an open-domain scenario is intrinsically difficult in conventional dialog systems, existing models often focus on generalizing character style on the basis of qualitative statistical analysis (Walker et al., 2012; Walker et al., 2011). 
The present work, by contrast, is in the vein of the SEQ2SEQ models of Vinyals and Le (2015) and Li et al. (2016), enriching these models by training persona vectors directly from conversational data and relevant side-information, and incorporating these directly into the decoder. 3 Sequence-to-Sequence Models Given a sequence of inputs X = {x1, x2, ..., xnX}, an LSTM associates each time step with an input gate, a memory gate and an output gate, respectively denoted as it, ft and ot. We distinguish e and h where et denotes the vector for an individual text unit (for example, a word or sentence) at time step t while ht denotes the vector computed by the LSTM model at time t by combining et and ht−1. ct is the cell state vector at time t, and σ denotes the sigmoid function. Then, the vector representation ht for each time step t is given by:   it ft ot lt  =   σ σ σ tanh  W ·  ht−1 es t  (1) ct = ft · ct−1 + it · lt (2) hs t = ot · tanh(ct) (3) 995 where Wi, Wf, Wo, Wl ∈ RK×2K. In SEQ2SEQ generation tasks, each input X is paired with a sequence of outputs to predict: Y = {y1, y2, ..., ynY }. The LSTM defines a distribution over outputs and sequentially predicts tokens using a softmax function: p(Y |X) = ny Y t=1 p(yt|x1, x2, ..., xt, y1, y2, ..., yt−1) = ny Y t=1 exp(f(ht−1, eyt)) P y′ exp(f(ht−1, ey′)) where f(ht−1, eyt) denotes the activation function between ht−1 and eyt. Each sentence terminates with a special end-of-sentence symbol EOS. In keeping with common practices, inputs and outputs use different LSTMs with separate parameters to capture different compositional patterns. During decoding, the algorithm terminates when an EOS token is predicted. At each time step, either a greedy approach or beam search can be adopted for word prediction. 4 Personalized Response Generation Our work introduces two persona-based models: the Speaker Model, which models the personality of the respondent, and the Speaker-Addressee Model which models the way the respondent adapts their speech to a given addressee — a linguistic phenomenon known as lexical entrainment (Deutsch and Pechmann, 1982). 4.1 Notation For the response generation task, let M denote the input word sequence (message) M = {m1, m2, ..., mI}. R denotes the word sequence in response to M, where R = {r1, r2, ..., rJ, EOS} and J is the length of the response (terminated by an EOS token). rt denotes a word token that is associated with a K dimensional distinct word embedding et. V is the vocabulary size. 4.2 Speaker Model Our first model is the Speaker Model, which models the respondent alone. This model represents each individual speaker as a vector or embedding, which encodes speaker-specific information (e.g., dialect, register, age, gender, personal information) that influences the content and style of her responses. Note that these attributes are not explicitly annotated, which would be tremendously expensive for our datasets. Instead, our model manages to cluster users along some of these traits (e.g., age, country of residence) based on the responses alone. Figure 1 gives a brief illustration of the Speaker Model. Each speaker i ∈[1, N] is associated with a user-level representation vi ∈RK×1. As in standard SEQ2SEQ models, we first encode message S into a vector representation hS using the source LSTM. 
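For concreteness, the gated recurrence of Eqs. (1)-(3), which is shared by the source encoder just described and by the persona-conditioned decoder introduced below, can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical names and, following the paper's notation, no bias terms; it is not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(W, h_prev, c_prev, e_t):
    """One step of the gated recurrence in Eqs. (1)-(3).

    W      : (4K, 2K) stacked gate weights [W_i; W_f; W_o; W_l]
    h_prev : (K,) previous hidden state h_{t-1}
    c_prev : (K,) previous cell state c_{t-1}
    e_t    : (K,) embedding of the current input token
    """
    K = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, e_t])   # W . [h_{t-1}; e_t], Eq. (1)
    i = sigmoid(z[0:K])                     # input gate
    f = sigmoid(z[K:2 * K])                 # forget gate
    o = sigmoid(z[2 * K:3 * K])             # output gate
    l = np.tanh(z[3 * K:4 * K])             # candidate update
    c_t = f * c_prev + i * l                # Eq. (2)
    h_t = o * np.tanh(c_t)                  # Eq. (3)
    return h_t, c_t
```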
Then for each step in the target side, hidden units are obtained by combining the representation produced by the target LSTM at the previous time step, the word representations at the current time step, and the speaker embedding vi:   it ft ot lt  =   σ σ σ tanh  W ·   ht−1 es t vi   (4) ct = ft · ct−1 + it · lt (5) hs t = ot · tanh(ct) (6) where W ∈R4K×3K. In this way, speaker information is encoded and injected into the hidden layer at each time step and thus helps predict personalized responses throughout the generation process. The Speaker embedding {vi} is shared across all conversations that involve speaker i. {vi} are learned by back propagating word prediction errors to each neural component during training. Another useful property of this model is that it helps infer answers to questions even if the evidence is not readily present in the training set. This is important as the training data does not contain explicit information about every attribute of each user (e.g., gender, age, country of residence). The model learns speaker representations based on conversational content produced by different speakers, and speakers producing similar responses tend to have similar embeddings, occupying nearby positions in the vector space. This way, the training data of speakers nearby in vector space help increase the generalization capability of the speaker model. For example, consider two speakers i and j who sound distinctly British, and who are therefore close in speaker embedding space. Now, suppose that, in the training data, speaker i was asked Where do you live? and responded in the UK. Even if speaker j was never asked the same question, this answer can help influence a good response from speaker j, and this without explicitly labeled geo-location information. 996 EOS Rob Word embeddings (50k) england london u.s. great good stay live okay monday tuesday Speaker embeddings (70k) Rob_712 where do you live in in Rob england Rob england . Rob . EOS Source Target skinnyoflynny2 Tomcoatez Kush_322 D_Gomes25 Dreamswalls kierongillen5 TheCharlieZ The_Football_Bar This_Is_Artful DigitalDan285 Jinnmeow3 Bob_Kelly2 Figure 1: Illustrative example of the Speaker Model introduced in this work. Speaker IDs close in embedding space tend to respond in the same manner. These speaker embeddings are learned jointly with word embeddings and all other parameters of the neural model via backpropagation. In this example, say Rob is a speaker clustered with people who often mention England in the training data, then the generation of the token ‘england’ at time t = 2 would be much more likely than that of ‘u.s.’. A non-persona model would prefer generating in the u.s. if ‘u.s.’ is more represented in the training data across all speakers. 4.3 Speaker-Addressee Model A natural extension of the Speaker Model is a model that is sensitive to speaker-addressee interaction patterns within the conversation. Indeed, speaking style, register, and content does not vary only with the identity of the speaker, but also with that of the addressee. For example, in scripts for the TV series Friends used in some of our experiments, the character Ross often talks differently to his sister Monica than to Rachel, with whom he is engaged in an on-again off-again relationship throughout the series. The proposed Speaker-Addressee Model operates as follows: We wish to predict how speaker i would respond to a message produced by speaker j. 
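Before the dyadic variant is spelled out, it may help to see how little the speaker-conditioned update of Eqs. (4)-(6) changes relative to the plain recurrence: the speaker vector v_i is simply appended to the gate input, widening W from 4K x 2K to 4K x 3K. The sketch below uses the same illustrative conventions as the earlier snippet and is not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def speaker_lstm_step(W, h_prev, c_prev, e_t, v_i):
    """Speaker-conditioned decoder step of Eqs. (4)-(6).

    W   : (4K, 3K) stacked gate weights over [h_{t-1}; e_t; v_i]
    v_i : (K,) embedding of speaker i, shared across all conversations
          involving that speaker and learned by backpropagation
    """
    K = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, e_t, v_i])  # Eq. (4): the gates see the speaker
    i = sigmoid(z[0:K])
    f = sigmoid(z[K:2 * K])
    o = sigmoid(z[2 * K:3 * K])
    l = np.tanh(z[3 * K:4 * K])
    c_t = f * c_prev + i * l                    # Eq. (5)
    h_t = o * np.tanh(c_t)                      # Eq. (6)
    return h_t, c_t
```

The Speaker-Addressee variant described next replaces v_i by an interaction vector V_{i,j} (Eqs. (8)-(10)), leaving the recurrence otherwise unchanged.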
Similarly to the Speaker model, we associate each speaker with a K dimensional speaker-level representation, namely vi for user i and vj for user j. We obtain an interactive representation Vi,j ∈RK×1 by linearly combining user vectors vi and vj in an attempt to model the interactive style of user i towards user j, Vi,j = tanh(W1 · vi + W2 · v2) (7) where W1, W2 ∈RK×K. Vi,j is then linearly incorporated into LSTM models at each step in the target:   it ft ot lt  =   σ σ σ tanh  W ·   ht−1 es t Vi,j   (8) ct = ft · ct−1 + it · lt (9) hs t = ot · tanh(ct) (10) Vi,j depends on both speaker and addressee and the same speaker will thus respond differently to a message from different interlocutors. One potential issue with Speaker-Addressee modelling is the difficulty involved in collecting a large-scale training dataset in which each speaker is involved in conversation with a wide variety of people. Like the Speaker Model, however, the SpeakerAddressee Model derives generalization capabilities from speaker embeddings. Even if the two speakers at test time (i and j) were never involved in the same conversation in the training data, two speakers i′ and j′ who are respectively close in embeddings may have been, and this can help modelling how i should respond to j. 4.4 Decoding and Reranking For decoding, the N-best lists are generated using the decoder with beam size B = 200. We set a maximum length of 20 for the generated candidates. Decoding operates as follows: At each time step, we first examine all B × B possible next-word candidates, and add all hypothesis ending with an EOS token to the N-best list. We then preserve the top-B unfinished hypotheses and move to the next word position. To deal with the issue that SEQ2SEQ models tend to generate generic and commonplace responses such as I don’t know, we follow Li et al. (2016) by reranking the generated N-best list using 997 a scoring function that linearly combines a length penalty and the log likelihood of the source given the target: log p(R|M, v) + λ log p(M|R) + γ|R| (11) where p(R|M, v) denotes the probability of the generated response given the message M and the respondent’s speaker ID. |R| denotes the length of the target and γ denotes the associated penalty weight. We optimize γ and λ on N-best lists of response candidates generated from the development set using MERT (Och, 2003) by optimizing BLEU. To compute p(M|R), we train an inverse SEQ2SEQ model by swapping messages and responses. We trained standard SEQ2SEQ models for p(M|R) with no speaker information considered. 5 Datasets 5.1 Twitter Persona Dataset Data Collection Training data for the Speaker Model was extracted from the Twitter FireHose for the six-month period beginning January 1, 2012. We limited the sequences to those where the responders had engaged in at least 60 (and at most 300) 3-turn conversational interactions during the period, in other words, users who reasonably frequently engaged in conversation. This yielded a set of 74,003 users who took part in a minimum of 60 and a maximum of 164 conversational turns (average: 92.24, median: 90). The dataset extracted using responses by these “conversationalists” contained 24,725,711 3-turn sliding-window (context-message-response) conversational sequences. 
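The 3-turn sliding-window extraction and the activity filter described above can be sketched roughly as follows; the code is illustrative only (the `conversations` structure and the helper name are assumptions, not the authors' pipeline).

```python
def extract_triples(conversations, min_interactions=60, max_interactions=300):
    """Build (user, context, message, response) training triples.

    conversations: dict mapping a responder ID to a list of their
    conversations, each an ordered list of turn strings. Responders
    outside the stated activity band are dropped, mirroring the
    "conversationalist" filter described above.
    """
    triples = []
    for user, convs in conversations.items():
        n_windows = sum(max(len(turns) - 2, 0) for turns in convs)
        if not (min_interactions <= n_windows <= max_interactions):
            continue
        for turns in convs:
            # Slide a 3-turn window: (context, message, response).
            for t in range(len(turns) - 2):
                triples.append((user, turns[t], turns[t + 1], turns[t + 2]))
    return triples
```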
In addition, we sampled 12000 3-turn conversations from the same user set from the Twitter FireHose for the three-month period beginning July 1, 2012, and set these aside as development, validation, and test sets (4000 conversational sequences each). Note that development, validation, and test sets for this data are single-reference, which is by design. Multiple reference responses would typically require acquiring responses from different people, which would confound different personas. Training Protocols We trained four-layer SEQ2SEQ models on the Twitter corpus following the approach of (Sutskever et al., 2014). Details are as follows: • 4 layer LSTM models with 1,000 hidden cells for each layer. • Batch size is set to 128. • Learning rate is set to 1.0. • Parameters are initialized by sampling from the uniform distribution [−0.1, 0.1]. • Gradients are clipped to avoid gradient explosion with a threshold of 5. • Vocabulary size is limited to 50,000. • Dropout rate is set to 0.2. Source and target LSTMs use different sets of parameters. We ran 14 epochs, and training took roughly a month to finish on a Tesla K40 GPU machine. As only speaker IDs of responses were specified when compiling the Twitter dataset, experiments on this dataset were limited to the Speaker Model. 5.2 Twitter Sordoni Dataset The Twitter Persona Dataset was collected for this paper for experiments with speaker ID information. To obtain a point of comparison with prior state-of-the-art work (Sordoni et al., 2015; Li et al., 2016), we measure our baseline (non-persona) LSTM model against prior work on the dataset of (Sordoni et al., 2015), which we call the Twitter Sordoni Dataset. We only use its test-set portion, which contains responses for 2114 context and messages. It is important to note that the Sordoni dataset offers up to 10 references per message, while the Twitter Persona dataset has only 1 reference per message. Thus BLEU scores cannot be compared across the two Twitter datasets (BLEU scores on 10 references are generally much higher than with 1 reference). Details of this dataset are in (Sordoni et al., 2015). 5.3 Television Series Transcripts Data Collection For the dyadic SpeakerAddressee Model we used scripts from the American television comedies Friends2 and The Big Bang Theory,3 available from Internet Movie Script Database (IMSDb).4 We collected 13 main characters from the two series in a corpus of 69,565 turns. We split the corpus into training/development/testing sets, with development and testing sets each of about 2,000 turns. Training Since the relatively small size of the dataset does not allow for training an open domain dialog model, we adopted a domain adaption strategy where we first trained a standard SEQ2SEQ 2https://en.wikipedia.org/wiki/Friends 3https://en.wikipedia.org/wiki/The_ Big_Bang_Theory 4http://www.imsdb.com 998 System BLEU MT baseline (Ritter et al., 2011) 3.60% Standard LSTM MMI (Li et al., 2016) 5.26% Standard LSTM MMI (our system) 5.82% Human 6.08% Table 2: BLEU on the Twitter Sordoni dataset (10 references). We contrast our baseline against an SMT baseline (Ritter et al., 2011), and the best result (Li et al., 2016) on the established dataset of (Sordoni et al., 2015). The last result is for a human oracle, but it is not directly comparable as the oracle BLEU is computed in a leave-one-out fashion, having one less reference available. We nevertheless provide this result to give a sense that these BLEU scores of 5-6% are not unreasonable. 
models using a much larger OpenSubtitles (OSDb) dataset (Tiedemann, 2009), and then adapting the pre-trained model to the TV series dataset. The OSDb dataset is a large, noisy, open-domain dataset containing roughly 60M-70M scripted lines spoken by movie characters. This dataset does not specify which character speaks each subtitle line, which prevents us from inferring speaker turns. Following Vinyals et al. (2015), we make the simplifying assumption that each line of subtitle constitutes a full speaker turn.5 We trained standard SEQ2SEQ models on OSDb dataset, following the protocols already described in Section 5.1. We run 10 iterations over the training set. We initialize word embeddings and LSTM parameters in the Speaker Model and the SpeakerAddressee model using parameters learned from OpenSubtitles datasets. User embeddings are randomly initialized from [−0.1, 0.1]. We then ran 5 additional epochs until the perplexity on the development set stabilized. 6 Experiments 6.1 Evaluation Following (Sordoni et al., 2015; Li et al., 2016) we used BLEU (Papineni et al., 2002) for parameter tuning and evaluation. BLEU has been shown to correlate well with human judgment on the response generation task, as demonstrated in (Galley et al., 2015). Besides BLEU scores, we also report perplexity as an indicator of model capability. 6.2 Baseline Since our main experiments are with a new dataset (the Twitter Persona Dataset), we first show that our LSTM baseline is competitive with the state-of5This introduces a degree of noise as consecutive lines are not necessarily from the same scene or two different speakers. Model Standard LSTM Speaker Model Perplexity 47.2 42.2 (−10.6%) Table 3: Perplexity for standard SEQ2SEQ and the Speaker model on the Twitter Persona development set. Model Objective BLEU Standard LSTM MLE 0.92% Speaker Model MLE 1.12% (+21.7%) Standard LSTM MMI 1.41% Speaker Model MMI 1.66% (+11.7%) Table 4: BLEU on the Twitter Persona dataset (1 reference), for the standard SEQ2SEQ model and the Speaker model using as objective either maximum likelihood (MLE) or maximum mutual information (MMI). the-art (Li et al., 2016) on an established dataset, the Twitter Sordoni Dataset (Sordoni et al., 2015). Our baseline is simply our implementation of the LSTM-MMI of (Li et al., 2016), so results should be relatively close to their reported results. Table 2 summarizes our results against prior work. We see that our system actually does better than (Li et al., 2016), and we attribute the improvement to a larger training corpus, the use of dropout during training, and possibly to the “conversationalist” nature of our corpus. 6.3 Results We first report performance on the Twitter Persona dataset. Perplexity is reported in Table 3. We observe about a 10% decrease in perplexity for the Speaker model over the standard SEQ2SEQ model. In terms of BLEU scores (Table 4), a significant performance boost is observed for the Speaker model over the standard SEQ2SEQ model, yielding an increase of 21% in the maximum likelihood (MLE) setting and 11.7% for mutual information setting (MMI). In line with findings in (Li et al., 2016), we observe a consistent performance boost introduced by the MMI objective function over a standard SEQ2SEQ model based on the MLE objective function. It is worth noting that our persona models are more beneficial to the MLE models than to the MMI models. 
This result is intuitive as the persona models help make Standard LSTM MLE outputs more informative and less bland, and thus make the use of MMI less critical. For the TV Series dataset, perplexity and BLEU scores are respectively reported in Table 5 and Table 6. As can be seen, the Speaker and SpeakerAddressee models respectively achieve perplexity values of 25.4 and 25.0 on the TV-series dataset, 999 Model Standard LSTM Speaker Model Speaker-Addressee Model Perplexity 27.3 25.4 (−7.0%) 25.0 (−8.4%) Table 5: Perplexity for standard SEQ2SEQ and persona models on the TV series dataset. Model Standard LSTM Speaker Model Speaker-Addressee Model MLE 1.60% 1.82% (+13.7%) 1.83% (+14.3%) MMI 1.70% 1.90% (+10.6%) 1.88% (+10.9%) Table 6: BLEU on the TV series dataset (1 reference), for the standard SEQ2SEQ and persona models. 7.0% and 8.4% percent lower than the correspondent standard SEQ2SEQ models. In terms of BLEU score, we observe a similar performance boost as on the Twitter dataset, in which the Speaker model and the Speaker-Addressee model outperform the standard SEQ2SEQ model by 13.7% and 10.6%. By comparing the Speaker-Addressee model against the Speaker model on the TV Series dataset, we do not observe a significant difference. We suspect that this is primarily due to the relatively small size of the dataset where the interactive patterns might not be fully captured. Smaller values of perplexity are observed for the Television Series dataset than the Twitter dataset, the perplexity of which is over 40, presumably due to the noisier nature of Twitter dialogues. 6.4 Qualitative Analysis Diverse Responses by Different Speakers Table 7 represents responses generated by persona models in response to three different input questions. We randomly selected 10 speakers (without cherry-picking) from the original Twitter dataset. We collected their user level representations from a speaker look-up table and integrated them into the decoding models. The model tends to generate specific responses for different people in response to the factual questions.6 Table 8 shows responses generated from the Speaker-Addressee Model using the TV-series dataset. Interestingly, we regularly observe that this model is sensitive to the identity of the addressee, generating words specifically targeted at that addressee (e.g., her name). For example, the model produces Of course, I love you, Emily in response to an input from Emily. Also, the model generates Of course I love you. ( kisses him), where the pronoun “him” accurately identifies the gender of the addressee. 6There appears to be a population bias in the training set that favors British users. message Where is your hometown? baseline I was born in Canada. user1 I’m from England. user2 I’m from Manchester. user3 I’m from Liverpool. user4 England. You? user5 My hometown. user6 I’m from Texas. user7 I’m from LA. user8 England. user9 I was born here. user10 I was born in the us. message What company do you work for? baseline I work for a company. user1 I don’t have a job. user2 I’m a manager. user3 I’m working. # user4 I work for a company. user5 I’m working. # user6 Customer service. user7 I work in retail. user8 I work in a cafe. user9 I work at a gas station. user10 I don’t know # message What did you have for dinner? baseline I had fish and chips. user1 I had spag bol. user2 Chicken and chips. user3 Chicken and rice. user4 Fish and chips. user5 I had spag bol. user6 I had Mexican food. user7 Salad... user8 I had chicken and chips. user9 I had spag bol. 
user10 Pizza. Table 7: Responses generated by the baseline (LSTM-MMI) and the Speaker Model for ten randomly selected users, without cherry picking. # indicates poor-quality responses produced by the system. Human Evaluation We conducted a human evaluation of outputs from the Speaker Model, using a crowdsourcing service. Since we cannot expect crowdsourced human judges to know or attempt to learn the ground truth of Twitter users who are not well-known public figures, we designed our experiment to evaluate the consistency of outputs associated with the speaker IDs. To this end, we collected 24 pairs of questions for which we would expect 1000 message Do you love me? baseline Of course, I love you. addressee: Emily, speaker: Sheldon response Of course, I love you, Emily. addressee: Emily, speaker: Penny response Of course, I love you, Emily. addressee: Leonard, speaker: Penny response Of course, I love you, Leonard. addressee: Monica, speaker: Chandler response Of course I love you. I want to marry you. addressee: Rachel, speaker: Chandler response Of course I love you. addressee: Ross, speaker: Chandler response Of course I love you. addressee: Ross, speaker: Rachel response Of course I love you. (kisses him) addressee: Emily, speaker: Rachel response Of course I love you. Table 8: Responses to Do you love me? from the SpeakerAddressee model on the TV-series dataset using different addressees and speakers. responses to be consistent if the persona model is coherent. For example, responses to the questions What country do you live in? and What city do you live in? would be considered consistent if the answers were England and London respectively, but not if they were UK and Chicago. Similarly, the responses to Are you vegan or vegetarian? and Do you eat beef? are consistent if the answers generated are vegan and absolutely not, but not if they are vegan and I love beef. We collected 20 pairs of outputs for randomly-selected personas provided by the Speaker Model for each question pair (480 response pairs total). We also obtained the corresponding outputs from the baseline MMI-enhanced SEQ2SEQ system. Since our purpose is to measure the gain in consistency over the baseline system, we presented the pairs of answers system-pairwise, i.e., 4 responses, 2 from each system, displayed on the screen, and asked judges to decide which of the two systems was more consistent. The position in which the system pairs were presented on the screen was randomized. The two systems were judged on 5-point zero-sum scale, assigning a score of 2 (-2) if one system was judged more (less) consistent than the other, and 1 (-1) if one was rated “somewhat” more (less) consistent. Ties were assigned a score of zero. Five judges rated each pair and their scores were averaged and remapped into 5 equal-width bins. After discarding ties, we found the persona model was judged either “more consistent” or “somewhat more consistent” in 56.7% of cases. If we ignore the “somewhat more consistent” judgments, the persona model wins in 6.1% of cases, compared User1 message Where is your hometown? response I’m from England. message Where are you from? response I’m from England. message In which city do you live now? response I live in London. message In which country do you live now? response I live in England. message What is your major? response Business. You? message What did you study in college? response I did business studies. message How old are you? response I’m 18. message What is your age? response I’m 18. 
User2 message Where is your hometown? response I live in Indonesia. message Where are you from? response England, you? message In which city do you live now? response I live in Jakarta. message In which country do you live now? response I live in Indonesia. message What is your major? response Business, you? message What did you study in college? response Psychology, you? message How old are you? response I’m 18. message What is your age? response I’m 16. Table 9: Examples of speaker consistency and inconsistency generated by the Speaker Model with only 1.6% for the baseline model. It should be emphasized that the baseline model is a strong baseline, since it represents the consensus of all 70K Twitter users in the dataset7. Table 9 illustrates how consistency is an emergent property of two arbitrarily selected users. The model is capable of discovering the relations between different categories of location such as London and the UK, Jakarta and Indonesia. However, the model also makes inconsistent response decisions, generating different answers in the second example in response to questions asking about age or major. Our proposed persona models integrate user embeddings into the LSTM, and thus can be viewed as encapsulating a trade-off between a persona-specific generation model and a general conversational model. 7I’m not pregnant is an excellent consensus answer to the question Are you pregnant?, while I’m pregnant is consistent as a response only in the case of someone who also answers the question Are you a guy or a girl? with something in the vein of I’m a girl. 1001 7 Conclusions We have presented two persona-based response generation models for open-domain conversation generation. There are many other dimensions of speaker behavior, such as mood and emotion, that are beyond the scope of the current paper and must be left to future work. Although the gains presented by our new models are not spectacular, the systems outperform our baseline SEQ2SEQ systems in terms of BLEU, perplexity, and human judgments of speaker consistency. We have demonstrated that by encoding personas in distributed representations, we are able to capture personal characteristics such as speaking style and background information. In the SpeakerAddressee model, moreover, the evidence suggests that there is benefit in capturing dyadic interactions. Our ultimate goal is to be able to take the profile of an arbitrary individual whose identity is not known in advance, and generate conversations that accurately emulate that individual’s persona in terms of linguistic response behavior and other salient characteristics. Such a capability will dramatically change the ways in which we interact with dialog agents of all kinds, opening up rich new possibilities for user interfaces. Given a sufficiently large training corpus in which a sufficiently rich variety of speakers is represented, this objective does not seem too far-fetched. Acknowledgments We with to thank Stephanie Lukin, Pushmeet Kohli, Chris Quirk, Alan Ritter, and Dan Jurafsky for helpful discussions. References David Ameixa, Luisa Coheur, Pedro Fialho, and Paulo Quaresma. 2014. Luke, I am your father: dealing with out-of-domain requests by using movies subtitles. In Intelligent Virtual Agents, pages 13–21. Springer. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of the International Conference on Learning Representations (ICLR). Rafael E Banchs and Haizhou Li. 2012. 
IRIS: a chatoriented dialogue system based on the vector space model. In Proc. of the ACL 2012 System Demonstrations, pages 37–42. Yun-Nung Chen, Wei Yu Wang, and Alexander Rudnicky. 2013. An empirical investigation of sparse log-linear models for improved dialogue act classification. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8317–8321. IEEE. Werner Deutsch and Thomas Pechmann. 1982. Social interaction and the development of definite descriptions. Cognition, 11:159–184. Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. ∆BLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proc. of ACL-IJCNLP, pages 445–450, Beijing, China, July. Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014. Learning continuous phrase representations for translation modeling. In Proc. of ACL, pages 699–709, Baltimore, Maryland. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Alfred Kobsa. 1990. User modeling in dialog systems: Potentials and hazards. AI & society, 4(3):214–231. Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–23. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proc. of NAACL-HLT. Grace I Lin and Marilyn A Walker. 2011. All the world’s a stage: Learning character models from film. In Proceedings of the Seventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE). Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proc. of ACL, pages 11–19, Beijing, China, July. Lasguido Nio, Sakriani Sakti, Graham Neubig, Tomoki Toda, Mirna Adriani, and Satoshi Nakamura. 2014. Developing non-goal dialog system based on examples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355–361. Springer. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Association for Computational Linguistics. 1002 Alice H Oh and Alexander I Rudnicky. 2000. Stochastic language generation for spoken dialogue systems. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems-Volume 3, pages 27–32. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL, pages 311–318. Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. 2009. Are we there yet? research in commercial spoken dialog systems. In Text, Speech and Dialogue, pages 3–13. Springer. Adwait Ratnaparkhi. 2002. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435–455. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 583– 593. 
Jost Schatztnann, Matthew N Stuttle, Karl Weilhammer, and Steve Young. 2005. Effects of the user model on simulation-based learning of dialogue strategies. In Automatic Speech Recognition and Understanding, 2005 IEEE Workshop on, pages 220– 225. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proc. of AAAI. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In ACL-IJCNLP, pages 1577–1586. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proc. of NAACL-HLT. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems (NIPS), pages 3104–3112. J¨org Tiedemann. 2009. News from OPUS – a collection of multilingual parallel corpora with tools and interfaces. In Recent advances in natural language processing, volume 5, pages 237–248. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proc. of ICML Deep Learning Workshop. Wolfgang Wahlster and Alfred Kobsa. 1989. User models in dialog systems. Springer. Marilyn A Walker, Rashmi Prasad, and Amanda Stent. 2003. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH. Marilyn A Walker, Ricky Grant, Jennifer Sawyer, Grace I Lin, Noah Wardrip-Fruin, and Michael Buell. 2011. Perceived or not perceived: Film character models for expressive nlg. In Interactive Storytelling, pages 109–121. Springer. Marilyn A Walker, Grace I Lin, and Jennifer Sawyer. 2012. An annotated corpus of film dialogue for learning and characterizing character style. In LREC, pages 1373–1378. William Yang Wang, Ron Artstein, Anton Leuski, and David Traum. 2011. Improving spoken dialogue understanding using phonetic mixture models. In FLAIRS Conference. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proc. of EMNLP, pages 1711–1721, Lisbon, Portugal, September. Association for Computational Linguistics. Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conversation model. CoRR, abs/1510.08565. Steve Young, Milica Gaˇsi´c, Simon Keizer, Franc¸ois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for pomdp-based spoken dialogue management. Computer Speech & Language, 24(2):150–174. 1003
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1004–1013, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Discriminative Deep Random Walk for Network Classification Juzheng Li, Jun Zhu, Bo Zhang Dept. of Comp. Sci. & Tech., State Key Lab of Intell. Tech. & Sys. Tsinghua University, Beijing, 100084, China [email protected]; {dcszj,dcszb}@tsinghua.edu.cn Abstract Deep Random Walk (DeepWalk) can learn a latent space representation for describing the topological structure of a network. However, for relational network classification, DeepWalk can be suboptimal as it lacks a mechanism to optimize the objective of the target task. In this paper, we present Discriminative Deep Random Walk (DDRW), a novel method for relational network classification. By solving a joint optimization problem, DDRW can learn the latent space representations that well capture the topological structure and meanwhile are discriminative for the network classification task. Our experimental results on several real social networks demonstrate that DDRW significantly outperforms DeepWalk on multilabel network classification tasks, while retaining the topological structure in the latent space. DDRW is stable and consistently outperforms the baseline methods by various percentages of labeled data. DDRW is also an online method that is scalable and can be naturally parallelized. 1 Introduction Categorization is an important task in natural language processing, especially with the growing scale of documents in the Internet. As the documents are often not isolated, a large amount of the linguistic materials present a network structure such as citation, hyperlink and social networks. The large size of networks calls for scalable machine learning methods to analyze such data. Recent efforts have been made in developing statistical models for various network analysis tasks, such as network classification (Neville and Jensen, 2000), content recommendation (Fouss et al., 2007), link prediction (Adamic and Adar, 2003), and anomaly detection (Savage et al., 2014). One common challenge of statistical network models is to deal with the sparsity of networks, which may prevent a model from generalizing well. One effective strategy to deal with network sparsity is to learn a latent space representation for the entities in a network (Hoff et al., 2002; Zhu, 2012; Tang and Liu, 2011; Tang et al., 2015). Among various approaches, DeepWalk (Perozzi et al., 2014) is a recent method that embeds all the entities into a continuous vector space using deep learning methods. DeepWalk captures entity features like neighborhood similarity and represents them by Euclidean distances (See Figure 1(b)). Furthermore, since entities that have closer relationships are more likely to share the same hobbies or belong to the same groups, such an embedding by DeepWalk can be useful for network classification, where the topological information is explored to encourage a globally consistent labeling. Although DeepWalk is effective on learning embeddings of the topological structure, when dealing with a network classification task, it lacks a mechanism to optimize the objective of the target task and thus often leads to suboptimal embeddings. In particular, for our focus of relational network classification, we would like the embeddings to be both representing the topological structure of the network actors and discriminative in predicting the class labels of actors. 
To address the above issues, we present Discriminative Deep Random Walk (DDRW) for relational network classification. DDRW extends DeepWalk by jointly optimizing the classification objective and the objective of embedding entities in a latent space that maintains the topological structure. Under this joint learning framework, DDRM manages to learn the latent representations 1004 (a) Karate Graph −1.2 −1 −0.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 (b) DeepWalk Embedding −1.5 −1 −0.5 0 0.5 1 1.1 1.2 1.3 1.4 1.5 1.6 (c) DDRW Embedding Figure 1: Different experimental results of embedding a network into a two dimensional real space. We use Karate Graph (Macskassy and Provost, 1977) for this example. Four different colors stand for the classes of the vertices. In (b), vertices which have stronger relations in the network are more likely to be closer in the embedding latent space. While in (c), besides the above-mentioned property, DDRW makes vertices in different classes more separated. that are strongly associated with the class labels (See Figure 1(c)), making it easy to find a separating boundary between the classes, and the actors that are connected in the original network are still close to each other in the latent social space. This idea of combining task-specific and representation objectives has been widely explored in other regions such as MedLDA (Zhu et al., 2012) and Supervised Dictionary Learning (Mairal et al., 2009). Technically, to capture the topological structure, we follow the similar idea of DeepWalk by running truncated random walks on the original network to extract sequences of actors, and then building a language model (i.e., Word2Vec (Mikolov et al., 2013b)) to project the actors into a latent space. To incorporate the supervising signal in network classification, we build a classifier based on the latent space representations. By sharing the same latent social space, the two objectives are strongly coupled and the latent social space is guided by both the network topology and class labels. DDRW optimizes the joint objective by using stochastic gradient descent, which is scalable and embarrassingly parallizable. We evaluate the performance on several realworld social networks, including BlogCatalog, Flickr and YouTube. Our results demonstrate that DDRW significantly boosts the classification accuracy of DeepWalk in multi-label network classification tasks, while still retaining the topological structure in the learnt latent social space. We also show that DDRW is stable and consistently outperforms the baseline methods by various percentages of labeled data. Although the networks we use only bring topological information for clarity, DDRW is flexible to consider additional attributes (if any) of vertices. For example, DDRW can be naturally extended to classify documents/webpages, which are often represented as a network (e.g., citation/hyperlink network), by conjoining with a word2vec component to embed the documents/webpages into the same latent space, similar as previous work on extending DeepWalk to incorporate attributes (Yang et al., 2015). 2 Problem Definition We consider the network classification problem, which classifies entities from a given network into one or more categories from a set Y. Let G = (V, E, Y ) denote a network, where V is the set of vertices, representing the entities of the network; E ⊆(V × V ) is the set of edges, representing the relations between the entities; and Y ⊆R|V |×|Y| denotes the labels of entities. 
We also consider YU as a set of unknown labels in the same graph G. The target of the classification task is to learn a model from labeled data and generate a label set YP to be the prediction of YU. The difference between YP and YU indicates the classification quality. When classifying elements X ∈Rn, traditional machine learning methods learn a weight matrix H to minimize the difference between YP = F(X, H) and YU, where F is any known fixed function. In network aspect, we will be able to utilize well-developed machine learning methods if adequate information of G is embedded into a corresponding form as X. By this motivation, relational learning (Getoor and Taskar, 2007; Neville and Jensen, 2000) methods are pop1005 ularly employed. In network classification, the internal structure of a network is resolved to extract the neighboring features of the entities (Macskassy and Provost, 2007; Wang and Sukthankar, 2013). Accordingly, the core problem is how to describe the irregular networks within formal feature spaces. A variety of approaches have been proposed with the purpose of finding effective statistical information through the network (Gallagher and Eliassi-Rad, 2008; Henderson et al., 2011; Tang and Liu, 2011). DeepWalk (Perozzi et al., 2014) is an outstanding method for network embedding, which uses truncated random walks to capture the explicit structure of the network and applies language models to learn the latent relationships between the actors. When applied to the network classification task, DeepWalk first learns X which describes the topological structure of G and then learns a subsequent classifier H. One obvious shortcoming of this two-step procedure is that the embedding step is unaware of the target class label information and likely to learn embeddings that are suboptimal for classification. We present Discriminative Deep Random Walk (DDRW) to enhance the effect of DeepWalk by learning X ∈R|V |×d and H ∈Rd×|Y| jointly. By using topological and label information of a certain network simultaneously, we will show that DDRW improves the classification accuracy significantly compared with most recent related methods. Furthermore, we will also show that the embedded result X produced by DDRW is able to retain the structure of G well. 3 Discriminative Deep Random Walk In this section, we present the details of Discriminative Deep Random Walk (DDRW). DDRW has both embedding and classification objectives. We optimize the two objectives jointly to learn latent representations that are strongly associated with the class labels in the latent space. We use stochastic gradient descent (Mikolov et al., 1991) as our optimization method. 3.1 Embedding Objective Let θ = (θ1, θ2, . . . , θ|V |) denote the embedded vectors in the latent space, and α denote the topological structure of the graph. The embedding objective can be described as an optimization prob4 9 18 12 3 11 5 16 … Wi : …4 16 18 3 5 … Wi+1 : …16 12 11 5 9 18… … Figure 2: A part of Random Walk process in an undirected graph. Every time an adjacent vertex is chosen randomly (no matter visited or not) as the arrows indicate, until reaching the maximum length s. lem as follows: min θ Lr(θ, α), (1) where Lr indicates the difference between the embedded representations θ and original topological structure α. For this objective, we use truncated random walks to capture the topological structure of the graph and the language model Word2Vec (Mikolov et al., 2013b) to learn the latent representations. Below, we explain each in turn. 
3.1.1 Random Walk Random Walk has been used in different regions in network analysis to capture the topological structure of graphs (Fouss et al., 2007; Andersen et al., 2006). As the name suggests, Random Walk chooses a certain vertex in the graph for the first step and then randomly migrates through the edges. Truncated random walk defines a maximum length s for all walk streams. In our implementation, we shuffle the whole vertices V in the graph for τ times to build the sample set W. After each time of shuffling, we take the permutation list of vertices as the starting points of walks. Every time a walk stream starts at one element in order, randomly chooses an adjacent vertex to move, and ends when this stream reaches s vertices. By this procedure we get totally 1006 τ|V | samples (i.e. walk streams) from the graph. Thus our sample set W ∈Rτ|V |×s is obtained as the training materials. 3.1.2 Word2Vec Existing work has shown that both the vertices in truncated random walks and the words in text articles follow similar power-law distributions in frequency, and then the idea of reshaping a social network into a form of corpus is very straightforward (Perozzi et al., 2014). Corresponding to linguistic analysis region, the objective is to find an embedding for a corpus to show the latent significances between the words. Words which have closer meanings are more likely to be embedded into near positions. Word2Vec (Mikolov et al., 2013b) is an appropriate tool for this problem. We use the Skip-gram (Mikolov et al., 2013a) strategy in Word2Vec, which uses the central word in a sliding window with radius R to predict other words in the window and make local optimizations. Specifically, let ω = rw(α) denote the full walk streams obtained from truncated random walks in Section 3.1.1. Then by Skip-gram we can get the objective function Lr(θ, α) = − τ X i=1 1 s s X t=1 X −R≤j≤R,j̸=0 log p(ωi,t+j|ωi,j). (2) The standard Skip-gram method defines p(ωi,t+j|ωi,j) in Eq.(2) as follows: p(ωO|ωI) = exp(θT ωO ˆθωI) P|V | i=1 exp(θT i ˆθωI) , (3) where ˆθi and θi are the input and output representations of the ith vertex, respectively. One shortcoming of the standard form is that the summation in Eq.(3) is very inefficient. To reduce the time consumption, we use the Hierarchical Softmax (Mnih and Hinton, 2009; Morin and Bengio, 2005) which is included in Word2Vec packages∗. In Hierarchical Softmax, the Huffman binary tree is employed as an alternative representation for the vocabulary. The gradient descent step will be faster thanks to the Huffman tree structure which allows a reduction of output units necessarily evaluated. ∗https://code.google.com/archive/p/word2vec/ 3.2 Classification Objective Let y = (y1, y2, . . . , y|V |) denote the labels, and β denote the subsequent classifier. The classification objective can be described as an optimization problem: min θ,β Lc(θ, β, y). (4) In DDRW, we use existing classifiers and do not attempt to extend them. Although SVMmulticalss (Crammer and Singer, 2002) often shows good performance in multi-class tasks empirically, we choose the classifier being referred to as L2-regularized and L2-loss Support Vector Classification (Fan et al., 2008) to keep pace with the baseline methods to be mentioned in Section 4. In L2-regularized and L2-loss SVC, the loss function is Lc(θ, β, y) =C |V | X i=1 (σ(1 −yiβT θi))2 + 1 2βT β, (5) where C is the regularization parameter, σ(x) = x if x > 0 and σ(x) = 0 otherwise. 
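For reference, Eq. (5) and its gradient with respect to the embeddings can be written down directly. The following NumPy sketch covers the binary case with illustrative names; the paper itself relies on the solver of Fan et al. (2008) rather than code like this.

```python
import numpy as np

def l2_svc_loss(theta, beta, y, C=1.0):
    """Squared-hinge (L2-loss) SVC objective of Eq. (5).

    theta : (n, d) embedded vertex representations
    beta  : (d,)  linear classifier weights
    y     : (n,)  binary labels in {-1, +1}
    """
    hinge = np.maximum(1.0 - y * (theta @ beta), 0.0)   # sigma(1 - y_i beta^T theta_i)
    return C * np.sum(hinge ** 2) + 0.5 * beta @ beta

def l2_svc_grad_theta(theta, beta, y, C=1.0):
    """Gradient of Eq. (5) w.r.t. the embeddings theta; this is the
    term that lets label information reshape the latent space in the
    joint update of Section 3.3."""
    active = np.maximum(1.0 - y * (theta @ beta), 0.0)   # zero outside the margin
    return (-2.0 * C * active * y)[:, None] * beta[None, :]
```

Only vertices inside the margin contribute to this gradient, so the label signal perturbs exactly those embeddings whose current position is inconsistent with their class.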
Eq.(5) is for binary classification problems, and is extended to multi-class problems following the one-againstrest strategy (Fan et al., 2008). 3.3 Joint Learning The main target of our method is to classify the unlabeled vertices in the given network. We achieve this target with the help of intermediate embeddings which latently represent the network structure. We simultaneously optimize two objectives in Section 3.1 and 3.2. Specifically, let L(θ, β, α, y) = ηLr(θ, α) + Lc(θ, β, y), where η is a key parameter that balances the weights of the two objectives. We solve the joint optimization problem: min θ,β L(θ, β, α, y). (6) We use stochastic gradient descent (Mikolov et al., 1991) to solve the optimization problem in Eq.(6). In each gradient descent step, we have θ ←θ −δ∂L ∂θ = θ −δ(η∂Lr ∂θ + ∂Lc ∂θ ), β ←β −δ∂L ∂β = β −δ∂Lc ∂β , (7) where δ is the learning rate for stochastic gradient descent. In our implementation, δ is initially set to 1007 0.025 and linearly decreased with the steps, same as the default setting of Word2Vec. The derivatives in Eq.(7) are estimated by local slopes. In Eq.(7), the latent representations adjust themselves according to both topological information (∂Lr/∂θ) and label information (∂Lc/∂θ). This process intuitively makes vertices in the same class closer and those in different classes farther, and this is also proved by experiments (See Figure 1). Thus by joint learning, DDRW can learn the latent space representations that well capture the topological structure and meanwhile are discriminative for the network classification task. We take each sample Wi from walk streams W to estimate the local derivatives of the loss function for a descent step. Stochastic gradient descent enables DDRW to be an online algorithm, and thus our method is easy to be parallelized. Besides, a vertex may repeatedly appear for numerous times in W produced by random walks. This repeat is superfluous for classifiers and there is a considerable possibility to arise overfitting. Inspired from DropOut (Hinton et al., 2012) ideas, we randomly ignore the label information to control the optimization process in an equilibrium state. 4 Experimental Setup In this section we present an overview of the datasets and baseline methods which we will compare with in the experiments. 4.1 Datasets We use three popular social networks, which are exactly same with those used in some of the baseline methods. Table 1 summarizes the statistics of the data. • BlogCatalog: a network of social relationships provided by blog authors. The labels of this graph are the topics specified by the uploading users. • Flickr: a network of the contacts between users of the Flickr photo sharing website. The labels of this graph represent the interests of users towards certain categories of photos. • YouTube: a network between users of the Youtube video sharing website. The labels stand for the groups of the users interested in different types of videos. Dataset BlogCatalog Flickr YouTube Actors |V | 10,312 80,513 1,138,499 Links |E| 333,983 5,899,882 2,990,443 Labels |Y| 29 195 47 Sparsity 6.3× 10-3 1.8× 10-3 4.6× 10-6 Max Degree 3,992 5,706 28,754 Average Degree 65 146 5 Table 1: Statistics of the three networks. Sparsity indicates the ratio of the actual links and links in a complete graph. 4.2 Baseline Methods We evaluate our proposed method by comparing it with some significantly related methods. 
• LINE (Tang et al., 2015)†: This method takes the edges of a graph as samples to train the first-order and second-order proximity seprately and integrate the results as an embedding of the graph. This method can handle both graphs with unweighted and weighted and is especially efficient in large networks. • DeepWalk (Perozzi et al., 2014): This method employs language models to learn latent relations between the vertices in the graph. The basic assumption is that the closer two vertices are in the embedding space, the deeper relationships they have and there is higher possibility that they are in the same categories. • SpectralClustering (Tang and Liu, 2011): This method finds out that graph cuts are useful for the classification task. This idea is implemented by finding the eigenvectors of a normalized graph Laplacian of the original graph. • EdgeCluster (Tang and Liu, 2009b): This method uses k-means clustering algorithm to segment the edges of the graph into pieces. Then it runs iterations on the small clusters to find the internal relationships separately. The core idea is to scale time-consuming work into tractable sizes. • Majority: This baseline method simply chooses the most frequent labels. It does not use any structural information of the graph. †Although LINE also uses networks from Flickr and YouTube in its experiments, the networks are different from this paper. 1008 As the datasets are not only multi-class but also multi-label, we usually need a thresholding method to test the results. But literature gives a negative opinion of arbitrarily choosing thresholding methods because of the considerably different performances. To avoid this, we assume that the number of the labels is already known in all the test processes. 5 Experiments In this section, we present the experimental results and analysis on both network classification and latent space learning. We thoroughly evaluate the performance on the three networks and analyze the sensitivity to key parameters. 5.1 Classification Task We first represent the results on multi-class classification and compare with the baseline methods. To have a direct and fair comparison, we use the same data sets, experiment procedures and testing points as in the reports of our relevant baselines (Perozzi et al., 2014; Tang and Liu, 2011; Tang and Liu, 2009b). The training set of a specified graph consists of the vertices, the edges and the labels of a certain percentage of labeled vertices. The testing set consists of the rest of the labels. We employ Macro-F1 and Micro-F1 (Yang, 1999) as our measurements. Micro-F1 computes F1 score globally while Macro-F1 caculates F1 score locally and then average them globally. All the results reported are averaged from 10 repeated processes. 5.1.1 BlogCatalog BlogCatalog is the smallest dataset among the three. In BlogCatalog we vary the percentage of labeled data from 10% to 90%. Our results are presented in Table 2. We can see that DDRW performs consistently better than all the baselines on both Macro-F1 and Micro-F1 with the increasing percentage of labeled data. When compared with DeepWalk, DDRW obtains larger improvement when the percentage of labeled nodes is high. This improvement demonstrates the significance of DDRW on learning discriminative latent embeddings that are good for classification tasks. 5.1.2 Flickr Flickr is a larger dataset with quite a number of classes. In this experiment we vary the percentage of labeled data from 1% to 10%. Our results are presented in Table 3. 
We can see that DDRW still performs better than the baselines significantly on both Macro-F1 and Micro-F1, and the results are consistent with what in BlogCatalog. 5.1.3 YouTube YouTube is an even larger dataset with fewer classes than Flickr. In YouTube we vary the percentage of labeled data from 1% to 10%. Our results are presented in Table 4. In YouTube, LINE shows its strength in large sparse networks, probably because the larger scale of samples reduces the discrepancy from actual distributions. But from a general view, DDRW still performs better at most of the test points thanks to the latent representations when links are not sufficient. 5.2 Parameter Sensitivity We now present an analysis of the sensitivity with respect to several important parameters. We measure our method with changing parameters to evaluate its stability. Despite the parameters which are unilateral to classification performance, the two main bidirectional parameters are η and the dimension d of embedding space in different percentages of labeled data. We use BlogCatalog and Flickr networks for the experiments, and fix parameters of random walks (τ = 30, s = 40, R = 10). We do not represent the effects of changing parameters of random walks because results usually show unilateral relationships with them. 5.2.1 Effect of η The key parameter η in our algorithm adjusts the weights of two objectives (Section 3.3). We represent the effect of changing η in Figure 3(a) and 3(b). We fix d = 128 in these experiments. Although rapid gliding can be observed on either sides, there are still sufficient value range where DDRW keeps the good performance. These experiments also show that η is not very sensitive towards the percentage of labeled data. 5.2.2 Effect of Dimensionality We represent the effect of changing dimension d of the embedding space in Figure 3(c) and 3(d). We fix η = 1.0 in these experiments. There is decline when the dimension is high, but this decrease is not very sharp. Besides, when the dimension is high, the percentage of labeled data has more effect on the performance. 1009 Labeled Nodes 10% 20% 30% 40% 50% 60% 70% 80% 90% Micro-F1(%) DDRW 37.13 39.31 41.08 41.76 42.64 43.17 43.80 44.11 44.79 LINE 35.42 37.89 39.71 40.62 41.46 42.09 42.55 43.26 43.68 DeepWalk 36.00 38.20 39.60 40.30 41.00 41.30 41.50 41.50 42.00 SpecClust 31.06 34.95 37.27 38.93 39.97 40.99 41.66 42.42 42.62 EdgeClust 27.94 30.76 31.85 32.99 34.12 35.00 34.63 35.99 36.29 Majority 16.51 16.66 16.61 16.70 16.91 16.99 16.92 16.49 17.26 Macro-F1(%) DDRW 21.69 24.33 26.28 27.78 28.76 29.53 30.47 31.40 32.04 LINE 20.98 23.44 24.91 26.06 27.19 27.89 28.43 29.10 29.45 DeepWalk 21.30 23.80 25.30 26.30 27.30 27.60 27.90 28.20 28.90 SpecClust 19.14 23.57 25.97 27.46 28.31 29.46 30.13 31.38 31.78 EdgeClust 16.16 19.16 20.48 22.00 23.00 23.64 23.82 24.61 24.92 Majority 2.52 2.55 2.52 2.58 2.58 2.63 2.61 2.48 2.62 Table 2: Multi-class classification results in BlogCatalog. 
Labeled Nodes 1% 2% 3% 4% 5% 6% 7% 8% 9% 10% Micro-F1(%) DDRW 33.61 35.20 36.72 37.43 38.31 38.89 39.33 39.64 39.85 40.02 LINE 31.65 33.98 35.46 36.63 37.53 38.20 38.47 38.74 39.07 39.25 DeepWalk 32.40 34.60 35.90 36.70 37.20 37.70 38.10 38.30 38.50 38.70 SpecClust 27.43 30.11 31.63 32.69 33.31 33.95 34.46 34.81 35.14 35.41 EdgeClust 25.75 28.53 29.14 30.31 30.85 31.53 31.75 31.76 32.19 32.84 Majority 16.34 16.31 16.34 16.46 16.65 16.44 16.38 16.62 16.67 16.71 Macro-F1(%) DDRW 14.49 17.81 20.05 21.40 22.91 23.84 25.12 25.79 26.28 26.43 LINE 13.69 17.77 19.88 21.07 22.36 23.62 24.78 25.11 25.69 25.90 DeepWalk 14.00 17.30 19.60 21.10 22.10 22.90 23.60 24.10 24.60 25.00 SpecClust 13.84 17.49 19.44 20.75 21.60 22.36 23.01 23.36 23.82 24.05 EdgeClust 10.52 14.10 15.91 16.72 18.01 18.54 19.54 20.18 20.78 20.85 Majority 0.45 0.44 0.45 0.46 0.47 0.44 0.45 0.47 0.47 0.47 Table 3: Multi-class classification results in Flickr. Labeled Nodes 1% 2% 3% 4% 5% 6% 7% 8% 9% 10% Micro-F1(%) DDRW 38.18 39.46 40.17 41.09 41.76 42.31 42.80 43.29 43.81 44.12 LINE 38.06 39.36 40.30 41.14 41.58 41.93 42.22 42.67 43.09 43.55 DeepWalk 37.95 39.28 40.08 40.78 41.32 41.72 42.12 42.48 42.78 43.05 SpecClust 26.61 35.16 37.28 38.35 38.90 39.51 40.02 40.49 40.86 41.13 EdgeClust 23.90 31.68 35.53 36.76 37.81 38.63 38.94 39.46 39.92 40.07 Majority 24.90 24.84 25.25 25.23 25.22 25.33 25.31 25.34 25.38 25.38 Macro-F1(%) DDRW 29.35 32.07 33.56 34.41 34.89 35.38 35.80 36.15 36.36 36.72 LINE 27.36 31.08 32.51 33.39 34.26 34.81 35.27 35.52 35.95 36.14 DeepWalk 29.22 31.83 33.06 33.90 34.35 34.66 34.96 35.22 35.42 35.67 SpecClust 24.62 29.33 31.30 32.48 33.24 33.89 34.15 34.47 34.77 34.98 EdgeClust 19.48 25.01 28.15 29.17 29.82 30.65 30.75 31.23 31.45 31.54 Majority 6.12 5.86 6.21 6.10 6.07 6.19 6.17 6.16 6.18 6.19 Table 4: Multi-class classification results in YouTube. 1010 10 −2 10 −1 10 0 10 1 10 2 0.15 0.2 0.25 0.3 0.35 0.4 0.45 η Micro F1 0.1 0.2 0.5 0.9 (a) BlogCatalog, η 10 −2 10 −1 10 0 10 1 10 2 0.15 0.2 0.25 0.3 0.35 0.4 0.45 η Micro F1 0.1 0.2 0.5 0.9 (b) Flickr, η 10 −2 10 −1 10 0 10 1 10 2 0.25 0.3 0.35 0.4 0.45 d Micro F1 0.1 0.2 0.5 0.9 (c) BlogCatalog, d 10 −2 10 −1 10 0 10 1 10 2 0.25 0.3 0.35 0.4 0.45 d Micro F1 0.1 0.2 0.5 0.9 (d) Flickr, d Figure 3: Parameter Sensitivity in BlogCatalog and Flickr K 1 5 10 20 50 DDRW(10%) 91.3 71.0 58.3 44.3 31.2 DDRW(50%) 90.9 69.8 62.0 44.7 30.7 DDRW(90%) 90.2 72.8 59.7 43.4 31.1 DeepWalk 91.2 73.2 59.8 46.5 31.2 Random 0.7 0.7 0.7 0.6 0.6 Table 5: Adjacency Predict Accuracy(%) in BlogCatalog. 5.3 Representation Efficiency Finally, we examine the quality of the latent embeddings of entities discovered by DDRW. For network data, our major expectation is that the embedded social space should maintain the topological structure of the network. A visualization of the topological structure in a social space is showed in Figure 1. Besides, we examine the neighborhood structure of the vertices. Specifically, we check the top-K nearest vertices for each vertex in the embedded social space and calculate how many of the vertex pairs have edges between them in the observed network. We call this Adjacency Predict Accuracy. Table 5 shows the results, where DDRW with different percentages of labeled data, DeepWalk and Random are compared in BlogCatalog dataset. The baseline method Random maps all the vertices equably randomly into a fixed-size space. The experiments show that although DeepWalk outperforms on the whole, the performance of DDRW is approximate. 
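A minimal sketch of this adjacency check is given below. It reflects our reading of the procedure just described; the Euclidean distance is an assumption, since the metric used in the embedded space is not stated.

```python
import numpy as np

def adjacency_predict_accuracy(emb, adj, K):
    """emb: (n, d) vertex embeddings; adj: (n, n) binary adjacency matrix.

    For every vertex, take its K nearest neighbours in the embedding space
    and count how many of these vertex pairs are actually linked in the
    observed network.
    """
    n = emb.shape[0]
    sq = (emb ** 2).sum(axis=1)
    dist = sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T   # squared Euclidean distances
    np.fill_diagonal(dist, np.inf)                         # exclude the vertex itself
    hits, total = 0, 0
    for v in range(n):
        nearest = np.argsort(dist[v])[:K]                  # top-K nearest vertices
        hits += int(adj[v, nearest].sum())
        total += K
    return hits / total
```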
DDRW is proved to inherit some important properties in latent representations of the network. 6 Related Work Relational classification (Geman and Geman, 1984; Neville and Jensen, 2000; Getoor and Taskar, 2007) is a class of methods which involve the data item relation links during classification. A number of researchers have studied different methods for network relational learning. (Macskassy and Provost, 2003) present a simple weighted vote relational neighborhood classifier. (Xu et al., 2008) leverage the nonparametric infinite hidden relational model to analyze social networks. (Neville and Jensen, 2005) propose a latent group model for relational data, which discovers and exploits the hidden structures responsible for the observed autocorrelation among class labels. (Tang and Liu, 2009a) propose the latent social dimensions which are represented as continuous values and allow each node to involve at different dimensions in a flexible manner. (Gallagher et al., 2008) propose a method that learn sparsely labeled network data by adding ghost edges between neighbor vertices, and (Lin and Cohen, 2010) by using PageRank. (Wang and Sukthankar, 2013) extend the conventional relational classification to consider more additional features. (Gallagher and Eliassi-Rad, 2008) propose a complimentary approach to within-network classification based on the use of label-independent features. (Henderson et al., 2011) propose a regional feature generating method and demonstrate the usage of the regional feature in within-network and across-network classification. (Tang and Liu, 2009b) propose an edge-centric clustering scheme to extract sparse social dimensions for collective behavior prediction. (Tang and Liu, 2011) propose the concept of social dimensions to represent the latent affiliations of the entities. (Vishwanathan et al., 2010) propose Graph Kernels to use relational data during classification process and (Kang et al., 2012) propose a faster approximated method of Graph Kernels. 7 Conclusion This paper presents Discriminative Deep Random Walk (DDRW), a novel approach for relational multi-class classification on social networks. By simultaneously optimizing embedding and classification objectives, DDRW gains significantly better performances in network classification tasks 1011 than baseline methods. Experiments on different real-world datasets represent adequate stability of DDRW. Furthermore, the representations produced by DDRW is both an intermediate variable and a by-product. Same as other embedding methods like DeepWalk, DDRW can provide wellformed inputs for statistical analyses other than classification tasks. DDRW is also naturally an online algorithm and thus easy to parallel. The future work has two main directions. One is semi-supervised learning. The low proportion of labeled vertices is a good platform for semisupervised learning. Although DDRW has already combined supervised and unsupervised learning together, better performance can be expected after introducing well-developed methods. The other direction is to promote the random walk step. Literature has represented the good combination of random walk and language models, but this combination may be unsatisfactory for classification. It would be great if a better form of random walk is found. Acknowledgments The work was supported by the National Basic Research Program (973 Program) of China (No. 2013CB329403), National NSF of China (Nos. 
61322308, 61332007), the Youngth Topnotch Talent Support Program, Tsinghua TNList Lab Big Data Initiative, and Tsinghua Initiative Scientific Research Program (No. 20141080934). References Lada A. Adamic and Eytan Adar. 2003. Friends and neighbors on the web. Social Networks, 25:211– 230. Reid Andersen, Fan R. K. Chung, and Kevin J. Lang. 2006. Local graph partitioning using pagerank vectors. In Foundations of Computer Science, pages 476–486. Koby Crammer and Yoram Singer. 2002. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Franc¸ois Fouss, Alain Pirotte, Jean-Michel Renders, and Marco Saerens. 2007. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge and Data Engineering, 19:355–369. Brian Gallagher and Tina Eliassi-Rad. 2008. Leveraging label-independent features for classification in sparsely labeled networks: An empirical study. In Proceedings of the Second International Conference on Advances in Social Network Mining and Analysis, pages 1–19. Brian Gallagher, Hanghang Tong, Tina Eliassi-Rad, and Christos Faloutsos. 2008. Using ghost edges for classification in sparsely labeled networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 256–264. Stuart Geman and Donald Geman. 1984. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell., 6:721–741. Lise Getoor and Ben Taskar. 2007. Introduction to statistical relational learning. The MIT Press. Keith Henderson, Brian Gallagher, Lei Li, Leman Akoglu, Tina Eliassi-Rad, Hanghang Tong, and Christos Faloutsos. 2011. It’s who you know: graph mining using recursive structural features. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 663–671. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580. Peter D. Hoff, Adrian E. Raftery, and Mark S. Handcock. 2002. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97:1090–1098. U. Kang, Hanghang Tong, and Jimeng Sun. 2012. Fast random walk graph kernel. In SDM, pages 828–838. Frank Lin and William W. Cohen. 2010. Semisupervised classification of network data using very few labels. In Proceedings of the 2010 International Conference on Advances in Social Networks Analysis and Mining, pages 192–199. Sofus A. Macskassy and Foster J. Provost. 1977. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33:452–473. Sofus A. Macskassy and Foster Provost. 2003. A simple relational classifier. In Proceedings of the MultiRelational Data Mining Workshop at the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1012 Sofus A. Macskassy and Foster J. Provost. 2007. Classification in networked data: A toolkit and a univariate case study. Journal of Machine Learning Research, 8:935–983. Julien Mairal, Jean Ponce, Guillermo Sapiro, Andrew Zisserman, and Francis R. Bach. 2009. 
Supervised dictionary learning. In Advances in Neural Information Processing Systems, pages 1033–1040. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 1991. Stochastic gradient learning in neural networks. In Proceedings of Neuro-Nˆımes 91. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Andriy Mnih and Geoffrey E. Hinton. 2009. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pages 1081–1088. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, pages 246–252. Jennifer Neville and David Jensen. 2000. Iterative classification in relational data. In Proceedings of AAAI-2000 Workshop on Learning Statistical Models from Relational Data, pages 13–20. Jennifer Neville and David Jensen. 2005. Leveraging relational autocorrelation with latent group models. In Proceedings of the 4th International Workshop on Multi-relational Mining, pages 49–55. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701–710. David Savage, Xiuzhen Zhang, Xinghuo Yu, Pauline Lienhua Chou, and Qingmai Wang. 2014. Anomaly detection in online social networks. Social Networks, 39:62–70. Lei Tang and Huan Liu. 2009a. Relational learning via latent social dimensions. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 817–826. Lei Tang and Huan Liu. 2009b. Scalable learning of collective behavior based on sparse social dimensions. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, pages 1107–1116. Lei Tang and Huan Liu. 2011. Leveraging social media networks for classification. Data Mining and Knowledge Discovery, 23:447–478. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067–1077. S. V. N. Vishwanathan, Nicol N. Schraudolph, Risi Kondor, and Karsten M. Borgwardt. 2010. Graph kernels. Journal of Machine Learning Research, 11:1201–1242. Xi Wang and Gita Sukthankar. 2013. Multi-label relational neighbor classification using social context features. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 464–472. Zhao Xu, Volker Tresp, Shipeng Yu, and Kai Yu. 2008. Nonparametric relational learning for social network analysis. In the 2nd SNA-KDD Workshop on Social Network Mining and Analysis. Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y. Chang. 2015. Network representation learning with rich text information. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 2111–2117. Yiming Yang. 1999. An evaluation of statistical approaches to text categorization. Information Retrieval, 1:69–90. Jun Zhu, Amr Ahmed, and Eric P. Xing. 2012. 
MedLDA: maximum margin supervised topic models. The Journal of Machine Learning Research, 13:2237–2278. Jun Zhu. 2012. Max-margin nonparametric latent feature models for link prediction. In Proceedings of the 29th International Conference on Machine Learning, pages 719–726.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1014–1023, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation Nut Limsopatham and Nigel Collier Language Technology Lab Department of Theoretical and Applied Linguistics University of Cambridge Cambridge, UK {nl347,nhc30}@cam.ac.uk Abstract Automatically recognising medical concepts mentioned in social media messages (e.g. tweets) enables several applications for enhancing health quality of people in a community, e.g. real-time monitoring of infectious diseases in population. However, the discrepancy between the type of language used in social media and medical ontologies poses a major challenge. Existing studies deal with this challenge by employing techniques, such as lexical term matching and statistical machine translation. In this work, we handle the medical concept normalisation at the semantic level. We investigate the use of neural networks to learn the transition between layman’s language used in social media messages and formal medical language used in the descriptions of medical concepts in a standard ontology. We evaluate our approaches using three different datasets, where social media texts are extracted from Twitter messages and blog posts. Our experimental results show that our proposed approaches significantly and consistently outperform existing effective baselines, which achieved state-of-the-art performance on several medical concept normalisation tasks, by up to 44%. 1 Introduction Existing studies (O’Connor et al., 2014; Limsopatham and Collier, 2015a; Limsopatham and Collier, 2015b) have shown that data from social media (e.g. Twitter1 and Facebook2) can be leveraged to improve the understanding of patients’ ex1http://twitter.com 2http://facebook.com perience in healthcare, such as the spread of infectious diseases and side-effects of drugs. However, the lexical and grammatical variability of the language used in social media poses a key challenge for extracting information (Baldwin et al., 2013; O’Connor et al., 2014). In particular, the frequent use of informal language, non-standard grammar and abbreviation forms, as well as typos in social media messages has to be taken into account by effective information extraction systems. The task of medical concept normalisation for social media text, which aims to map a variable length social media message to a medical concept in some external coding system, is faced with a similar challenge (Limsopatham and Collier, 2015b). Traditional approaches, e.g. (Ristad and Yianilos, 1998; Aronson, 2001; Lu et al., 2011; McCallum et al., 2012), used proximity matching or heuristic string matching rules based on dictionary lookup when mapping texts to medical concepts. For example, Ristad and Yianilos (1998) incorporated edit-distance when mapping similar texts. The MetaMap system of Aronson (2001) applied a rule-based approach using pre-defined variants of terms when mapping texts to medical concepts in the UMLS Metathesaurus3. However, as shown in Table 1, existing string matching techniques may not be able to map the social media message “moon face and 30 lbs in 6 weeks” to the medical concept ‘Weight Gain’, or map “head spinning a little” to ‘Dizziness’, as no words in the social media messages and the description of the medical concepts correspond. Recent studies, e.g. 
(Leaman et al., 2013; Leaman and Lu, 2014; Limsopatham and Collier, 2015a), applied machine learning techniques to take into account relationships between different words (e.g. synonyms) when performing normal3https://www.nlm.nih.gov/pubs/ factsheets/umlsmeta.html 1014 Social media message Description of corresponding medical concept lose my appetite Loss of appetite i don’t hunger or thirst Loss of Appetite hungry Hunger moon face and 30 lbs in 6 weeks Weight Gain gained 7 lbs Weight Gain lose the 10 lbs Body Weight Decreased feeling dizzy ... Dizziness head spinning a little Dizziness terrible headache!! Headache Table 1: Examples of social media messages and their related medical concepts. isation. For instance, the DNorm system of Leaman et al. (2013), which achieved state-of-the-art performance on several medical concept normalisation tasks for medical articles (Do˘gan et al., 2014) and patient records (Suominen et al., 2013), used a pairwise learning-to-rank technique to learn the similarity between different terms when performing concept normalisation. Limsopatham and Collier (2015a) leveraged translations between the informal language used in social media and the formal language used in the description of medical concepts in an ontology. However, we argue that effective concept normalisation requires a system to take into account the semantics of social media messages and medical concepts. For example, to be able to map from the social media message “i don’t hunger or thirst” to the medical concept ‘Loss of Appetite’, a normalisation system has to take into account the semantics of the whole message; otherwise, “i don’t hunger or thirst” may be mapped to the medical concept ‘Hunger’, because they contain the term “hunger” in common. In this work, we go beyond string matching. We propose to learn and exploit the semantic similarity between texts from social media messages and medical concepts using deep neural networks. In particular, we investigate the use of techniques from two families of deep neural networks, i.e. a convolutional neural network (CNN) and a recurrent neural network (RNN), to learn the mapping between social media texts and medical concepts. We evaluate our approaches using three different datasets that contain messages from Twitter and blog posts. Our experimental results show that our proposed approaches significantly outperform existing strong baselines (e.g. DNorm) across all of the three datasets. The performance improvement is by up to 44%. The main contributions of this paper are threefold: 1. We propose two novel approaches based on CNN and RNN for medical concept normalisation. 2. We introduce two datasets with the goldstandard mappings between medical concepts and social media texts extracted from tweets and blog posts, respectively. 3. We thoroughly evaluate our proposed approaches using these two datasets and an existing dataset of tweets related to the topic of adverse drug reactions (ADRs) (Limsopatham and Collier, 2015a). The remainder of this paper is organised as follows. In Section 2, we discuss related work and position our paper in the literature. Section 3 introduces our neural network approaches for medical concept normalisation. We describe our experimental setup and empirically evaluate our proposed approaches in Sections 4 and 5, respectively. We provide further analysis and discussion of our approaches in Section 6. Finally, Section 7 provides concluding remarks. 
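As a concrete illustration of the failure mode motivated above (cf. Table 1), the toy sketch below scores one message against candidate concepts by token overlap alone and picks the wrong concept. It is our own simplified stand-in for surface-matching approaches, not one of the systems evaluated in this paper.

```python
def overlap_score(phrase, concept):
    """Fraction of concept-description tokens that also occur in the phrase."""
    p, c = set(phrase.lower().split()), set(concept.lower().split())
    return len(p & c) / max(len(c), 1)

concepts = ["Loss of Appetite", "Hunger", "Dizziness"]
message = "i don't hunger or thirst"

best = max(concepts, key=lambda c: overlap_score(message, c))
print(best)   # -> 'Hunger', although the gold concept is 'Loss of Appetite'
```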
2 Related Work Existing techniques for concept normalisation are mostly based on string matching (e.g. (Tsuruoka et al., 2007; Ristad and Yianilos, 1998; Lu et al., 2011; McCallum et al., 2012). For example, McCallum et al. (2012) used conditional random field to learn edit distance between phrases. In the medical domain, Tsuruoka et al. (2007) learned mappings between phrases in medical documents and medical concepts by using string matching features, such as character bigrams and 1015 common tokens. Meanwhile, Metke-Jimenez and Karimi (2015), and O’Connor et al. (2014) used term weighting techniques, such as TF-IDF and BM25 (Robertson and Zaragoza, 2009) to retrieve relevant concepts. We tackle the concept normalisation task in a different manner. In particular, we use deep neural networks to capture the similarity and/or dependency between terms and effectively represent a given social media message in a low dimensional vector representation, before mapping it to a medical concept. Another research area related to this work is the exploitation of word embeddings (i.e. distributed vector representation of words). It has been empirically shown that word embeddings can capture semantic and syntactic similarities between words (Turian et al., 2010; Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014). The cosine similarity between vectors of words has a positive correlation with the semantic similarity between them (Mikolov et al., 2013b; Pennington et al., 2014). Importantly, word embeddings have been effectively used for several NLP tasks, such as named entity recognition (Passos et al., 2014), machine translation (Mikolov et al., 2013a) and part-of-speech tagging (Turian et al., 2010). In the context of concept normalisation, Limsopatham and Collier (2015a) showed that effective performance could be achieved by mapping the processed social media messages and medical concepts using the similarity of their embeddings. In this work, we use word embeddings as inputs of deep neural networks, which would allow an effective representation of words when learning the concept normalisation. Neural networks, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), have been effectively applied to NLP tasks, such as NER, sentiment classifications and machine translation (Collobert et al., 2011; Kim, 2014; Bahdanau et al., 2014). For example, Collobert et al. (2011) effectively used a multilayer neural network for chunking, part-ofspeech tagging, NER and semantic role labelling. Kim (2014) effectively used CNN with pre-built word embeddings when performing sentence classifications. Kalchbrenner et al. (2014) learned representation of sentences by using CNN. Meanwhile, Bahdanau et al. (2014) used RNN to encode a sentence written in one language (e.g. French) into a fixed length vector before decoding it to Figure 1: Our CNN architecture for medical concept normalisation. a sentence in another language (e.g. English) for translation. Socher et al. used recursive neural networks to model sentences for different tasks, including paraphrase detection (Socher et al., 2011) and sentence classification (Socher et al., 2013). In this paper, we investigate only the use of CNN and RNN for medical concept normalisation, as recursive neural networks require parse trees of input sentences while grammatical rules are typically ignored in social media messages. 
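For reference, the embedding-similarity idea reviewed above, which is also reused in the EmbSim baseline of Section 4.4 (phrases represented by summing their word vectors and compared by cosine similarity), can be sketched as follows. The table `emb` is assumed to be a pre-trained word-to-vector mapping such as the embeddings described in Section 4.2.

```python
import numpy as np

def phrase_vector(words, emb, dim=300):
    """Sum the embeddings of the words in a phrase; out-of-vocabulary words
    are simply skipped in this sketch (Section 4.2 instead samples random
    vectors for them)."""
    vecs = [emb[w] for w in words if w in emb]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom > 0 else 0.0

# e.g. cosine(phrase_vector("feeling dizzy".split(), emb),
#             phrase_vector("dizziness".split(), emb))
```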
3 Neural Networks for Concept Normalisation Next, we introduce our medical concept normalisation approaches based on CNN and RNN in Sections 3.1 and 3.2, respectively. 3.1 CNN for Concept Normalisation Our first approach uses CNN to learn the semantic representation of a social media message before mapping it to an appropriate medical concept. We use a CNN architecture with a single convolutional and pooling layer, as shown in Figure 1. Specifically, we firstly represent a given social media message of length l words (padded where necessary) using a sentence matrix S ∈Rd×l: S =   | | | | x1 x2 x3 ... xl | | | |   (1) where each column of S is the d-dimensional vector (i.e. embedding) xi ∈Rd of each word in the social media message, which can be retrieved from pre-built word embeddings. This allows the model to take into account semantic features from the embeddings of each word. 1016 Figure 2: Our RNN architecture for medical concept normalisation. We then apply a convolution operation using a filter w ∈Rd×h to a window of h words. In particular, the filter w is convolved over the sequence of words in the sentence matrix S to create a feature matrix C. Each feature ci in C is extracted from a window of words xi:i+h−1, as follow: ci = f(w · xi:i+h−1 + b) (2) where f is an activation function, such as sigmoid or tanh, and b ∈R is a bias. Note that multiple filters (e.g. using different size h of window of words) can be used to extract multiple features. This convolution operation enables the learning of dependencies between words from their semantic representation (i.e. word embeddings). In order to capture the most important features, max pooling (Collobert et al., 2011) is applied to take the maximum value of each row in the matrix C: cmax =   max(C1,:) ... max(Cd,:)   (3) Finally, the fixed sized vector cmax forms a fully connected layer, which is used as inputs of softmax for multi-class classification. Indeed, the vector cmax provides a sentence representation that captures an extensional semantic information of the social media message for softmax to map to an appropriate medical concept. 3.2 RNN for Concept Normalisation Our second approach uses RNN to capture the semantics of sequences of words in a social media message during normalisation. This approach is different from the CNN approach (introduced Section 3.1) in that instead of using the convolutional TwADR-S TwADR-L AskAPatient |Q| 201 1,436 8,662 |VQ| 488 995 2,872 |C| 58 2,200 1,036 |VC| 98 2,394 1,200 |Q 7→C|avg 3.4655 0.6428 8.3610 |Q 7→C|SD 5.6264 3.3168 39.2009 |Q 7→C|min 1 0 1 |Q 7→C|max 35 58 1,073 Table 2: Statistics of the datasets used in the experiments. |Q|: Number of queries. |VQ|: Vocabulary size of queries. |C|: Number of target concepts. |VC|: Vocabulary size of definition of target concepts. |Q 7→C|avg and |Q 7→C|SD: Average number of queries mapped to each target concept, and its standard deviation (SD). |Q 7→C|min and |Q 7→C|max: Mininum and maximum number of queries mapped to a given target concept, respectively. layer to learn the representation of social media messages (i.e. the vector representation at the fully connected layer), our RNN approach deploys a recurrent layer, as shown in Figure 2. Similar to the CNN approach, we initially represent a social media message of length l words using a sentence matrix S ∈Rd×l, as in Equation (1). 
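Both architectures therefore consume the same sentence-matrix input. A minimal sketch of assembling S from pre-trained embeddings is given below; the zero padding and the fixed maximum length are our assumptions for batching, while the random vectors for out-of-vocabulary words follow Section 4.2.

```python
import numpy as np

def sentence_matrix(message, emb, max_len, dim=300, rng=np.random):
    """Build the d x l input matrix S of Equation (1) for one message."""
    words = message.lower().split()[:max_len]
    cols = []
    for w in words:
        # Pre-trained vector if available, otherwise a random vector in [-0.25, 0.25].
        cols.append(emb[w] if w in emb else rng.uniform(-0.25, 0.25, size=dim))
    while len(cols) < max_len:          # pad to the fixed length l with zero vectors
        cols.append(np.zeros(dim))
    return np.stack(cols, axis=1)       # shape (d, l)

# e.g. S = sentence_matrix("head spinning a little", emb, max_len=20)
```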
Then, the recurrent layer processes the vector xi of each word in the social media message sequentially and produces a hidden state output hi ∈Rk, where k ∈Z and k > 0. Importantly, when processing each input vector xi, the hidden state output hi−1 from the previous word is also recursively taken into account: hi = f (hi−1, xi) (4) where f is a recurrent unit, such as long shortterm memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent unit (GRU) (Cho et al., 2014). Finally, the hidden state output hl, which is the output from processing the last word of the social media message, is used as an input of the softmax for identifying the appropriate concept, in the same manner as the vector at the fully connected layer of the CNN approach in Section 3.1. 4 Experimental Setup 4.1 Datasets To evaluate our proposed approaches, we use three different datasets (namely, TwADR-S, TwADR-L 1017 and AskAPatient)4, where the task is to map a social media phrase to a relevant medical concept. In these datasets, a given social media phrase is mapped to only one medical concept. Table 2 shows statistics for the three datasets. In particular, TwADR-S is the dataset provided by Limsopatham and Collier (2015a), which contains 201 Twitter phrases and their corresponding SNOMED-CT5 concept. The total number of target concepts is 58, while on average a medical concept can be mapped by 3.47 queries with the standard deviation of 5.63. The TwADR-L dataset is our new dataset that we constructed from a collection of three months of tweets (between July and November 2015), downloaded using the Twitter Streaming API6 by filtering using the name of a pre-defined set of drugs, which have been used in the literature for ADR profiling (e.g. cognitive enhancers) (Bender et al., 2007). These tweets were sampled and then annotated by undergraduate-level linguists. This collection contains 1,436 Twitter phrases that can be mapped to one of 2,220 medical concepts from the SIDER 4 database of drug profiles7. Note that 1,947 from the 2,220 concepts are not relevant to any of the Twitter phrases. For the AskAPatient dataset, we extracted the gold-standard mappings of social media messages and medical concepts from the ADR annotation collection of Karimi et al. (2015). Our AskAPatient dataset contains 8,662 phrases8, each of which can be mapped to one of the 1,036 medical concepts from SNOMED-CT and AMT (the Australian Medicines Terminology). We expect this dataset to be less difficult than TwADR-S and TwADR-L, as the nature of blog posts is less informal and ambiguous than Twitter messages. For each of the datasets, we randomly divide it into ten equally folds, so that our approaches and the baselines would be trained on the same sets of data. We evaluate our approaches based on the accuracy performance, averaged across the ten folds. The significant difference between the performance of our approaches and the baselines is measured using the paired t-test (p < 0.05). 4TwADR-L and AskAPatient datasets are available on Zenodo.org (DOI:http://dx.doi.org/10.5281/zenodo.55013). 5http://www.ihtsdo.org/snomed-ct. 6https://dev.twitter.com/streaming/ overview 7http://sideeffects.embl.de/ 8From blog posts on http://www.askapatient. com website. 4.2 Pre-trained Word Embeddings As our CNN (Section 3.1) and RNN (Section 3.2) approaches require word vectors as inputs, we investigate the use of two different pre-trained word embeddings. 
The first word embeddings (denoted, GNews) are the publicly available 300dimension embeddings (vocabulary size of 3M) that were induced from 100 billion words from Google News using word2vec (Mikolov et al., 2013b)9, which has been shown to be effective for several tasks (Baroni et al., 2014; Kim, 2014). The second word embeddings (denoted, BMC) induced from 854M words of medical articles downloaded from BioMed Central10 by using the skip-gram model from word2vec (with default parameters). The BMC embeddings also have 300 dimension. For the words that do not existing in any embeddings, we use a vector of random values sampled from [−0.25, 0.25]. As an alternative, we also use randomly generated embeddings (denoted, Rand) with 300 dimensions, where a vector representation of each word is randomly sampled from [−0.25, 0.25]. This allows the investigation of the effectiveness of our approaches when the semantic information from pre-built embeddings is not available. 4.3 Parameters of Our CNN and RNN Approaches For our CNN approach, we use the filter w with the window size h of 3, 4 and 5, each of which with 100 feature maps, which have shown to be effective for modelling sentences in sentiment analysis (Kim, 2014). For the RNN, we use gated recurrent unit (GRU) (Cho et al., 2014) and set the size k of the output vector of each recurrent unit to 100. In addition, for both CNN and RNN, we use rectifier linear unit (ReLU) (Nair and Hinton, 2010) as activation functions. We also apply L2 regularisation of the weight vectors. We train the models over a mini-batch of size 50 to minimise the negative log-likelihood of correct predictions. The stochastic gradient descent with back-propagation is performed using Adadelta update rule (Zeiler, 2012). We initially set the number of epochs for training both CNN and RNN approaches to be 100, and allow the models to update the input 9https://code.google.com/p/word2vec/ 10http://www.biomedcentral.com/about/ datamining 1018 embeddings in the sentence matrix S. Later, in Sections 6.2 and 6.3, we discuss the performance achieved as we vary the number of epochs used for training the models, and the performance achieves when we allow and do not allow the models to update the input embeddings, respectively. 4.4 Baselines We consider five different baselines as follows: 1. TF-IDF: A traditional term matching-based approach, using the TF-IDF score. 2. BM25: A traditional term matching-based approach, using the BM25 score, which has shown to be effective for several text retrieval tasks (Robertson and Zaragoza, 2009) 3. EmbSim: The cosine similarity between the word vector representation of a social media phrase and the description of a medical concept. If the phrase (or the description) contains several words, we represent it by adding up the values of the same dimension of the embedding of each word. 4. DNorm: A machine learning-based approach that exploits the relationships between words (e.g. synonyms) learned from training data (Leaman and Lu, 2014). This approach achieved state-of-the-art performance for several medical concept normalisation tasks (Suominen et al., 2013; Do˘gan et al., 2014). Note that we customise the opensource version11 of DNorm to enable the mapping to a specific set of the target concepts for each dataset. 5. P-MT: The concept normalisation approach that translates social media texts to a formal medical text before mapping to appropriate medical concepts using the cosine similarity of their embeddings (Limsopatham and Collier, 2015a). 
We use the variant where the top-5 translations are used to map the medical concepts by taking the ranked position into account. We calculate the cosine similarity using either the GNews or the BMC embeddings. 6. LogisticRegression: A variant of our proposed approaches where we concatenate embeddings of terms (padded where necessary) 11http://www.ncbi.nlm.nih.gov/ CBBresearch/Lu/Demo/tmTools/#DNorm in each social media phrase into a fixed-size sentence vector, before using this vector as input features for a multi-class logistic regression classifier. Another possible baseline is a word-sense disambiguation system, such as IMS (Zhong and Ng, 2010). Nevertheless, the results from our initial experiments using IMS showed that it could not perform effectively on the three datasets. This is because the performance of IMS depends heavily on the contexts (i.e. words surrounding the input phrase); however, such contexts are not available in any of the three datasets. Therefore, we do not report the performance of IMS in this paper. Note that for the baselines that require training data (i.e. DNorm and P-MT) and our two proposed approaches, apart from the training data provides with each fold of the datasets, we also train them using the descriptions of the target medical concepts, as these data are also used by the nonsupervised baselines (i.e. TF-IDF, BM25 and EmbSim). 5 Experimental Results In this section, we compare the performance of our CNN and RNN approaches for medical concept normalisation against the six baselines, introduced in Section 4.4. Table 3 compares the performances of our proposed approaches with the baselines in terms of accuracy on the three datasets (i.e. TwADR-S, TwADR-L, AskAPatient). Overall, as expected, the accuracy performance achieved by our approaches and the baselines on the AskAPatient dataset is higher than the TwADR-L and TwADR-S. This is due to nature use of language in Twitter, which is more ambiguous and informal than blog posts. When comparing among the existing baseline approaches, we observe that DNorm and P-MT are the most effective baselines. In particular, DNorm outperforms the other baselines for the TwADR-S (accuracy 0.2983) and AskAPatient (accuracy 0.7339) datasets, while P-MT with GNews embeddings is the most effective baseline for the TwADR-L dataset (accuracy 0.3371). In addition, term matchingbased approaches, i.e. TF-IDF (accuracy 0.1638, 0.2293 and 0.5547, respectively) and BM25 (accuracy 0.1638, 0.2300 and 0.5546), achieve almost similar performances, which are also comparable to the performances of EmbSim baselines. When comparing the effectiveness of different 1019 Approach Word Embeddings Accuracy TwADR-S TwADR-L AskAPatient TF-IDF 0.1638 0.2293 0.5547 BM25 0.1638 0.2300 0.5546 EmbSim GNews 0.2494 0.2326 0.5422 EmbSim BMC 0.1348 0.2057 0.5141 DNorm 0.2983 0.3099 0.7339 P-MT GNews 0.2346 0.3371 0.7235 P-MT BMC 0.1248 0.3114 0.7126 LogisticRegression GNews 0.3186 0.3409 0.7767 LogisticRegression BMC 0.3036 0.3548 0.7752 CNN Rand 0.3229• 0.4267∗◦• 0.8013∗◦• CNN GNews 0.4174∗◦• 0.4478∗◦• 0.8141∗◦• CNN BMC 0.3921∗◦• 0.4415∗◦• 0.8139∗◦• RNN Rand 0.2936• 0.3791∗◦• 0.7991∗◦• RNN GNews 0.3529∗◦• 0.3882∗◦• 0.7998∗◦• RNN BMC 0.3331• 0.3847∗◦• 0.7996∗◦• Table 3: The accuracy performance of our proposed approaches and the baselines. Significant differences (p < 0.05, paired t-test) compared to the DNorm, P-MT with GNews embeddings, and P-MT with BMC embeddings, are denoted ∗, ◦and •, respectively. 
pre-trained embeddings used in EmbSim and PMT, we observe that GNews is more effective than BMC for both approaches, across all of the three datasets. Next, we discuss the performance of our CNN and RNN approaches. From Table 3, we observe that both CNN and RNN markedly outperform all of the existing baselines for all of the three datasets. When compared with DNorm and P-MT with GNews baselines, which are the most effective existing baselines, we observe that both CNN and RNN significantly (p < 0.05, paired t-test) outperform the two baselines for all of the three datasets. Indeed, for the TwADR-L dataset, CNN with GNews (accuracy 0.4478) outperforms DNorm (accuracy 0.3099) by 44%. In addition, the choice of embeddings has a marked impact on the achieved performance. In particular, the GNews embeddings benefit both CNN and RNN more than the BMC embeddings, which is in line with the previous finding that GNews is more useful than BMC for the EmbSim and PMT baselines. On the other than, the randomly generated embeddings (i.e. Rand) are less useful. These results show that the semantics captured in word embeddings are useful for both CNN and RNN approaches for medical concept normalisation. However, for both CNN and RNN, the choice of embeddings that are employed has less impact on the performance for the AskAPatient dataset, which has greater number of training data. Furthermore, we observe that the LogisticRegression baseline, a variant of our proposed approach that uses the multi-class logistic regression instead of neural networks for identifying relevance concepts, also outperforms the all of the existing baselines. However, it performs worse than both CNN and RNN approaches. This shows that while logistic regression can exploit the semantics of embeddings of individual terms in social media texts (at the word level), it cannot learn the semantics of the whole phrase as effectively as CNN and RNN. 6 Analysis & Discussions In this section, we further analyse the performance achieved by our proposed approaches. As the performance achieved by our CNN approach is better than that of our RNN approach, we discuss only our CNN approach in this section. 6.1 Failure Analysis We first discuss the results achieved by the baselines and our CNN approach. As expected, we observe that all approaches perform very well for the social media phrases that lexically match with the definition of the medical concepts, e.g. the social media phrase “attention deficit disorder” is 1020 (a) TwADR-S (b) TwADR-L (c) AskAPatient Figure 3: The accuracy performance achieved by training with different numbers of epochs for the three datasets. mapped to the medical concept ‘Attention Deficit Disorder’. However, for a more complex phrases, such as “appetite on 10”, “my appetite way up”, “suppressed appetite”, the baselines, including DNorm and P-MT, cannot effectively incorporate the modifiers of the word “appetite” in different phrases. For example, “appetite on 10”, “my appetite way up” should be mapped to ‘Increased Appetite’, while “suppressed appetite” should be mapped to ‘Loss of Appetite’. On the other hand, for social media phrases that do not have any terms in common with the definition of any medical concepts, all of the baselines performs poorly for most of the cases. 
For instance, even though DNorm can learn that the term “focusing” has some relationship with “concentration”, it maps any phrases containing “focusing” to the ‘Attention Concentration Difficulty’ concept, including phrases, such as “focusing monster”, which should be mapped to ‘Consciousness Abnormal’. Our CNN approach could deal with most of these cases effectively, as it considers the semantic representation of the whole phrase during normalisation. 6.2 Impact of Number of Training Epochs Next, we discuss the normalisation performance as we vary, between 1 and 200, the number of epochs used for training our CNN model. Figures 3(a), 3(b) and 3(c) show the performance in terms of accuracy achieved during training and testing for the TwADR-S, TWADR-L and AskAPatient datasets, respectively. We observe that training can be effectively achieved at around 60 - 70 epochs for the TwADR-S and TwADR-L datasets, and around 40 epochs for the AskAPatient dataset, before the performance becomes stable. We notice a gap between the performance achieved during training and testing, especially for the TwADR-S Dataset Accuracy CNN with CNN with updated emb. fixed emb. TwADR-S 0.4174 0.4369 TwADR-L 0.4478 0.4590 AskAPatient 0.8141• 0.7869 Table 4: The accuracy performance of our CNN approach with the GNews embeddings, when allowing (updated emb.) and not allowing (fixed emb.) the model to update the input word embeddings. Significant difference (p < 0.05, paired ttest) between the performance achieved by the two variants, on each dataset, is denoted •. and TwADR-L datasets; however, this gap should be narrower if more training data are available. 6.3 Impact of Fixed Embeddings In this section, we compare the performance of our CNN with GNews embeddings when we allow (updated emb.) and when we do not allow (fixed emb.) the input embeddings to be updated. Table 4 reports the accuracy performance of the two variants for the three datasets. We observe that for TwADR-S and TwADR-L datasets, which are smaller datasets (dataset size of 201 and 1,436, respectively), a better performance can be achieved if the model is not allowed to update the embeddings of the input phrases. In contrast, for the AskAPatient dataset (dataset size of 8,662), allowing the model to update the embeddings results in a significantly (paired t-test, p < 0.05) better performance. We observe the same trends of performance when using BMC embeddings. These results suggest that for small datasets, we should leverage semantics from pre-built word embeddings and do not allow the model to update the 1021 embeddings. Meanwhile, for a larger dataset, further performance improvement can be achieved by allowing the model to update the embeddings. 7 Conclusions We have motivated the importance of semantics when normalising medical concepts in social media messages. In particular, as social media messages are typically ambiguous, we argue that effective concept normalisation should deal with them at the semantic level. To do so, we introduced two neural network-based approaches for medical concept normalisation, which are based on convolutional and recurrent neural network architectures. Our experimental results evaluated on three different social media datasets showed that both of our approaches markedly and significantly outperformed several strong baselines, including an existing approach that achieved state-of-the-art performance on several medical concept normalisation tasks. 
From the analysis of the results, we found that while some existing approaches can capture synonyms of words, they could not leverage the semantic meaning of the social media message. Our approaches overcomes this by learning the semantic representation of the social media message before passing it to a classifier to match an appropriate concept. Acknowledgements The authors wish to thank funding support from the EPSRC (grant number EP/M005089/1). References Alan R Aronson. 2001. Effective mapping of biomedical text to the umls metathesaurus: the metamap program. In AMIA, pages 17–21. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Timothy Baldwin, Paul Cook, Marco Lui, Andrew MacKinlay, and Li Wang. 2013. How noisy social media text, how diffrnt social media sources. In IJCNLP, pages 356–364. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL, pages 238–247. Andreas Bender, Josef Scheiber, Meir Glick, John W Davies, Kamal Azzaoui, Jacques Hamon, Laszlo Urban, Steven Whitebread, and Jeremy L Jenkins. 2007. Analysis of pharmacology data and the prediction of adverse drug reactions and off-target effects from chemical structure. ChemMedChem, 2(6):861–873. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537. Rezarta Islamaj Do˘gan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for disease name recognition and concept normalization. Journal of biomedical informatics, 47:1–10. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In ACL, pages 655–665. Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73–81. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746– 1751. Robert Leaman and Zhiyong Lu. 2014. Automated disease normalization with low rank approximations. In ACL, pages 24–28. Robert Leaman, Rezarta Islamaj Do˘gan, and Zhiyong Lu. 2013. Dnorm: disease name normalization with pairwise learning to rank. Bioinformatics, 29(22):2909–2917. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In ACL, pages 302–308. Nut Limsopatham and Nigel Collier. 2015a. Adapting phrase-based machine translation to normalise medical terms in social media messages. In EMNLP, pages 1675–1680. Nut Limsopatham and Nigel Collier. 2015b. Towards the semantic interpretation of personal health messages from social media. In Proceedings of the ACM First International Workshop on Understanding the City with Urban Informatics, UCUI ’15, pages 27– 30, New York, NY, USA. ACM. 1022 Zhiyong Lu, Hung-Yu Kao, Chih-Hsuan Wei, Minlie Huang, Jingchen Liu, Cheng-Ju Kuo, ChunNan Hsu, Richard TH Tsai, Hong-Jie Dai, Naoaki Okazaki, et al. 2011. The gene normalization task in biocreative iii. 
BMC bioinformatics, 12(Suppl 8):S2. Andrew McCallum, Kedar Bellare, and Fernando Pereira. 2012. A conditional random field for discriminatively-trained finite-state string edit distance. arXiv preprint arXiv:1207.1406. Alejandro Metke-Jimenez and Sarvnaz Karimi. 2015. Concept extraction to identify adverse drug reactions in medical forums: A comparison of algorithms. arXiv preprint arXiv:1504.06936. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In ICML, pages 807–814. Karen O’Connor, Pranoti Pimpalkhute, Azadeh Nikfarjam, Rachel Ginn, Karen L Smith, and Graciela Gonzalez. 2014. Pharmacovigilance on twitter? mining tweets for adverse drug reactions. In AMIA, volume 2014, pages 924–933. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. arXiv preprint arXiv:1404.5367. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532–1543. Eric Sven Ristad and Peter N Yianilos. 1998. Learning string-edit distance. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 20(5):522–532. Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc. Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS, pages 801–809. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631–1642. Hanna Suominen, Sanna Salanter¨a, Sumithra Velupillai, Wendy W Chapman, Guergana Savova, Noemie Elhadad, Sameer Pradhan, Brett R South, Danielle L Mowery, Gareth JF Jones, et al. 2013. Overview of the share/clef ehealth evaluation lab 2013. In Information Access Evaluation. Multilinguality, Multimodality, and Visualization, pages 212–231. Springer. Yoshimasa Tsuruoka, John McNaught, Sophia Ananiadou, et al. 2007. Learning string similarity measures for gene/protein name dictionary look-up using logistic regression. Bioinformatics, 23(20):2768–2774. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In ACL, pages 384– 394. Matthew D Zeiler. 2012. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In ACL, pages 78–83. 1023
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1024–1033, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Agreement-based Learning of Parallel Lexicons and Phrases from Non-Parallel Corpora Chunyang Liu†, Yang Liu†#∗, Huanbo Luan†, Maosong Sun†#, and Heng Yu‡ † State Key Laboratory of Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China # Jiangsu Collaborative Innovation Center for Language Competence, Jiangsu, China ‡ Samsung R&D Institute of China, Beijing 100028, China {liuchunyang2012,liuyang.china,luanhuanbo}@gmail.com, [email protected] [email protected] Abstract We introduce an agreement-based approach to learning parallel lexicons and phrases from non-parallel corpora. The basic idea is to encourage two asymmetric latent-variable translation models (i.e., source-to-target and target-to-source) to agree on identifying latent phrase and word alignments. The agreement is defined at both word and phrase levels. We develop a Viterbi EM algorithm for jointly training the two unidirectional models efficiently. Experiments on the ChineseEnglish dataset show that agreementbased learning significantly improves both alignment and translation performance. 1 Introduction Parallel corpora, which are large collections of parallel texts, serve as an important resource for inducing translation correspondences, either at the level of words (Brown et al., 1993; Smadja and McKeown, 1994; Wu and Xia, 1994) or phrases (Kupiec, 1993; Melamed, 1997; Marcu and Wong, 2002; Koehn et al., 2003). However, the availability of large-scale, wide-coverage corpora still remains a challenge even in the era of big data: parallel corpora are usually only existent for resourcerich languages and restricted to limited domains such as government documents and news articles. Therefore, intensive attention has been drawn to exploiting non-parallel corpora for acquiring translation correspondences. Most previous efforts have concentrated on learning parallel lexicons from non-parallel corpora, including parallel sentence and lexicon extraction via bootstrapping (Fung and Cheung, 2004), inducing parallel lexicons via canonical correlation analysis (Haghighi ∗Corresponding author: Yang Liu. et al., 2008), training IBM models on monolingual corpora as decipherment (Ravi and Knight, 2011; Nuhn et al., 2012; Dou et al., 2014), and deriving parallel lexicons from bilingual word embeddings (Vuli´c and Moens, 2013; Mikolov et al., 2013; Vuli´c and Moens, 2015). Recently, a number of authors have turned to a more challenging task: learning parallel phrases from non-parallel corpora (Zhang and Zong, 2013; Dong et al., 2015). Zhang and Zong (2013) present a method for retrieving parallel phrases from non-parallel corpora using a seed parallel lexicon. Dong et al. (2015) continue this line of research to further introduce an iterative approach to joint learning of parallel lexicons and phrases. They introduce a corpus-level latentvariable translation model in a non-parallel scenario and develop a training algorithm that alternates between (1) using a parallel lexicon to extract parallel phrases from non-parallel corpora and (2) using the extracted parallel phrases to enlarge the parallel lexicon. 
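Schematically, this alternation can be sketched as the loop below, where extract_phrases and update_lexicon are hypothetical callables standing in for the model-based steps (1) and (2); the sketch is our paraphrase of the procedure, not the original implementation.

```python
def bootstrap(source_phrases, target_phrases, seed_lexicon,
              extract_phrases, update_lexicon, iterations=10):
    """Alternate between phrase extraction and lexicon enlargement."""
    lexicon = dict(seed_lexicon)
    parallel_phrases = []
    for _ in range(iterations):
        # (1) use the current lexicon to extract parallel phrases from the corpora
        parallel_phrases = extract_phrases(source_phrases, target_phrases, lexicon)
        # (2) use the extracted phrases to enlarge the lexicon
        lexicon = update_lexicon(lexicon, parallel_phrases)
    return lexicon, parallel_phrases
```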
They show that starting from a small seed lexicon, their approach is capable of learning both new words and phrases gradually over time. However, due to the structural divergence between natural languages as well as the presence of noisy data, only using asymmetric translation models might be insufficient to accurately identify parallel lexicons and phrases from non-parallel corpora. Dong et al. (2015) report that the accuracy on Chinese-English dataset is only around 40% after running for 70 iterations. In addition, their approach seems prone to be affected by noisy data in non-parallel corpora as the accuracy drops significantly with the increase of noise. Since asymmetric word alignment and phrase alignment models are usually complementary, it is natural to combine them to make more accurate predictions. In this work, we propose to in1024 troduce agreement-based learning (Liang et al., 2006; Liang et al., 2008) into extracting parallel lexicons and phrases from non-parallel corpora. Based on the latent-variable model proposed by Dong et al. (2015), we propose two kinds of loss functions to take into account the agreement between both phrase alignment and word alignment in two directions. As the inference is intractable, we resort to a Viterbi EM algorithm to train the two models efficiently. Experiments on the Chinese-English dataset show that agreementbased learning is more robust to noisy data and leads to substantial improvements in phrase alignment and machine translation evaluations. 2 Background Given a monolingual corpus of source language phrases E = {e(s)}S s=1 and a monolingual corpus of target language phrases F = {f(t)}T t=1, we assume there exists a parallel corpus D = {⟨e(s), f(t)⟩|e(s) ↔f(t)}, where e(s) ↔f(t) denotes that e(s) and f(t) are translations of each other. As a long sentence in E is usually unlikely to have an translation in F and vise versa, most previous efforts build on the assumption that phrases are more likely to have translational equivalents on the other side (Munteanu and Marcu, 2006; Cettolo et al., 2010; Zhang and Zong, 2013; Dong et al., 2015). Such a set of phrases can be constructed by collecting either constituents of parsed sentences or strings with hyperlinks on webpages (e.g., Wikipedia). Therefore, we assume the two monolingual corpora are readily available and focus on how to extract D from E and F. To address this problem, Dong et al. (2015) introduce a corpus-level latent-variable translation model in a non-parallel scenario: P(F|E; θ) = X m P(F, m|E; θ) | {z } phrase alignment , (1) where m is phrase alignment and θ is a set of model parameters. Each target phrase f(t) is restricted to connect to exactly one source phrase: m = (m1, . . . , mt, . . . mT ), where mt ∈ {0, 1, . . . , S}. For example, mt = s denotes that f(t) is aligned to e(s). Note that e(0) represents an empty source phrase. They follow IBM Model 1 (Brown et al., 1993) to further decompose the model as P(F, m|E; θ) = p(T|S) (S + 1)T T Y t=1 P(f(t)|e(mt); θ), (2) where P(f(t)|e(mt); θ) is a phrase translation model that can be further defined as P(f(t)|e(mt); θ) = δ(mt, 0)ϵ + (1 −δ(mt, 0)) X a P(f(t), a|e(mt); θ) | {z } word alignment . (3) Dong et al. (2015) distinguish between empty and non-empty phrase translations. If a target phrase f(t) is aligned to the empty source phrase e(0) (i.e., mt = 0), they set the phrase translation probability to a fixed number ϵ. 
Otherwise, conventional word alignment models such as IBM Model 1 can be used for non-empty phrase translation: P(f(t), a|e(mt); θ) = p(J(t)|I(mt)) (I(mt) + 1)J(t) J(t) Y j=1 p(f(t) j |e(mt) aj ), (4) where p(J|I) is a length model and p(f|e) is a translation model. We use J(t) to denote the length of f(t). Therefore, the latent-variable model involves two kinds of latent structures: (1) phrase alignment m between source and target phrases, (2) word alignment a between source and target words within phrases. Given the two monolingual corpora E and F, the training objective is to maximize the likelihood of the training data: θ∗ = argmax θ n L(θ) o , (5) where L(θ) = log P(F|E; θ) − X I λI  X J p(J|I) −1  − X e γe  X f p(f|e) −1  − X f X e σ(f, e, d) log σ(f, e, d) p(f|e) .(6) 1025 genju guonei de zhidu zhengzhi canyu siyue ershierri guoji hezuo jiejue maoyi jiufen April 22 fiscal crisis political participation resolve trade disputes social stability genju guonei de zhidu zhengzhi canyu siyue ershierri guoji hezuo jiejue maoyi jiufen April 22 fiscal crisis political participation resolve trade disputes social stability (a) (b) Figure 1: Agreement between (a) Chinese-to-English and (b) English-to-Chinese phrase alignments. The arrows indicate translation directions. The links on which two models agree are highlighted in bold red. The outer agreement loss function (see Eq. (14)) aims to encourage the agreement at the phrase level. Note that d is a small seed parallel lexicon for initializing training 1 and σ(f, e, d) checks whether an entry ⟨f, e⟩exists in d. Given the monolingual corpora and the optimized model parameters, the Viterbi phrase alignment is calculated as m∗ = argmax m n P(F, m|E; θ∗) o (7) = argmax m ( T Y t=1 P(f(t)|e(mt); θ∗) ) .(8) Finally, parallel lexicons can be derived from the translation probability table of IBM model 1 θ∗and parallel phrases can be collected from the Viterbi phrase alignment m∗. This process iterates and enlarges parallel lexicons and phrases gradually over time. As it is very challenging to extract parallel phrases from non-parallel corpora, unidirectional models might only capture partial aspects of translation modeling on non-parallel corpora. Indeed, Dong et al. (2015) find that the accuracy of phrase alignment is only around 50% on the ChineseEnglish dataset. More importantly, their approach seems to be vulnerable to noise as the accuracy drops significantly with the increase of noise. As source-to-target and target-to-source translation models are usually complementary (Och and Ney, 2003; Koehn et al., 2003; Liang et al., 2006), 1Due to the difficulty of learning translation correspondences from non-parallel corpora, many authors have assumed that a small seed lexicon is readily available (Gaussier et al., 2004; Zhang and Zong, 2013; Vuli´c and Moens, 2013; Mikolov et al., 2013; Dong et al., 2015). it is appealing to combine them to improve alignment accuracy. 3 Approach 3.1 Agreement-based Learning The basic idea of our work is to encourage the source-to-target and target-to-source translation models to agree on both phrase and word alignments. For example, Figure 1 shows two example Chinese-to-English and English-to-Chinese phrase alignments on the same non-parallel data. As each model only captures partial aspects of translation modeling, our intuition is that the links on which two models agree (highlighted in red) are more likely to be correct. 
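As a concrete reference for the unidirectional baseline that the agreement terms will constrain, the following is a minimal sketch of the IBM-Model-1-style phrase-pair score of Eqs. (3)-(4) and the per-target-phrase Viterbi retrieval of Eqs. (7)-(8). All function and variable names (score_phrase_pair, t_table, len_prob, ...) are ours; the length model and translation table are assumed to be given as plain dictionaries, and the exhaustive loop over source phrases stands in for the translation-retrieval step used in practice.

```python
import math

def score_phrase_pair(f_phrase, e_phrase, t_table, len_prob, epsilon=1e-10):
    """IBM-Model-1-style score P(f | e) for one candidate phrase pair.

    t_table[(f_word, e_word)] holds p(f|e); t_table[(f_word, None)] is the NULL entry.
    len_prob[(J, I)] holds the length model p(J|I).
    e_phrase == None stands for the empty source phrase e(0), scored with the fixed epsilon.
    """
    if e_phrase is None:
        return epsilon
    I, J = len(e_phrase), len(f_phrase)
    # p(J|I) / (I+1)^J * prod_j sum_{i=0..I} p(f_j | e_i), where e_0 is the NULL word.
    score = len_prob.get((J, I), 1e-12) / float((I + 1) ** J)
    for f_word in f_phrase:
        s = t_table.get((f_word, None), 1e-12)          # NULL alignment
        s += sum(t_table.get((f_word, e_word), 1e-12) for e_word in e_phrase)
        score *= s
    return score

def viterbi_alignment(f_phrases, e_phrases, t_table, len_prob, epsilon=1e-10):
    """For each target phrase, pick the source phrase (or None = empty) with the best score."""
    alignment = {}
    for t, f_phrase in enumerate(f_phrases, start=1):
        best_s, best_score = None, epsilon               # start from the empty phrase e(0)
        for s, e_phrase in enumerate(e_phrases, start=1):
            sc = score_phrase_pair(f_phrase, e_phrase, t_table, len_prob, epsilon)
            if sc > best_score:
                best_s, best_score = s, sc
        alignment[t] = best_s                            # None means "left unaligned"
    return alignment
```

In practice, the inner loop is replaced by the coarse-to-fine translation-retrieval step of Dong et al. (2015), so that only a small candidate set of source phrases is scored for each target phrase.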
More formally, let P(F|E; −→θ ) be a sourceto-target translation model and P(E|F; ←−θ ) be a target-to-source model, where −→θ and ←−θ are corresponding model parameters. We use −→ m = (−→ m1, . . . , −→ mt, . . . , −→ mT ) to denote sourceto-target phrase alignment. Likewise, the targetto-source phrase alignment is denoted by ←− m = (←− m1, . . . , ←− ms, . . . , ←− mS). To ease the comparison between −→ m and ←− m, we represent them as sets of non-empty links equivalently: −→ m = n ⟨−→ mt, t⟩|−→ mt ̸= 0 o (9) ←− m = n ⟨s, ←− ms⟩|←− ms ̸= 0 o . (10) For example, suppose the source-to-target and target-to-source phrase alignments are −→ m = 1026 1: procedure VITERBIEM(E, F, d) 2: Initialize Θ(0) 3: for all k = 1, . . . , K do 4: ˆm(k) ←SEARCH(E, F, Θ(k−1)) 5: Θ(k) ←UPDATE(E, F, d, ˆm(k)) 6: end for 7: return ˆm(K), Θ(K) 8: end procedure Figure 2: A Viterbi EM algorithm for agreementbased learning of parallel lexicons and phrases from non-parallel corpora. F and E are nonparallel corpora, d is a seed parallel lexicon, Θ(k) is the set of model parameters at the k-th iteration, ˆm(k) is the Viterbi phrase alignment on which two models agree at the k-th iteration. (2, 3, 0, 0) and ←− m = (0, 1, 2). The equivalent link sets are −→ m = {⟨2, 1⟩, ⟨3, 2⟩} and ←− m = {⟨2, 1⟩, ⟨3, 2⟩}. Therefore, −→ m is said to be equal to ←− m (i.e., δ(−→ m, ←− m) = 1). Following Liang et al. (2006), we introduce a new training objective that favors the agreement between two unidirectional models: J (−→θ , ←−θ ) = log P(F|E; −→θ ) + log P(E|F; ←−θ ) − log X −→ m,←− m P(−→ m|E, F; −→θ )P(←− m|F, E; ←−θ ) ×∆(E, F, −→ m, ←− m, −→θ , ←−θ ), (11) where the posterior probabilities in two directions are defined as P(−→ m|E, F; −→θ ) = T Y t=1 P(f(t)|e(−→ mt); −→θ ) PS s=0 P(f(t)|e(s); −→θ ) (12) P(←− m|F, E; ←−θ ) = S Y s=1 P(e(s)|f(←− ms); ←−θ ) PT t=0 P(e(s)|f(t); ←−θ ) . (13) The loss function ∆(E, F, −→ m, ←− m, −→θ , ←−θ ) measures the disagreement between the two models. 3.2 Outer Agreement 3.2.1 Definition A straightforward loss function is to force the two models to generate identical phrase alignments: ∆outer(E, F, −→ m, ←− m, −→θ , ←−θ ) = 1 −δ(−→ m, ←− m). (14) We refer to Eq. (14) as outer agreement since it only considers phrase alignment and ignores the word alignment within aligned phrases. 3.2.2 Training Objective Since the outer agreement forces two models to generate identical phrase alignments, the training objective can be written as Jouter(−→θ , ←−θ ) = log P(F|E; −→θ ) + log P(E|F; ←−θ ) + log X m P(m|E, F; −→θ )P(m|F, E; ←−θ ), (15) where m is a phrase alignment on which two models agree. The partial derivatives of the training objective with respect to source-to-target model parameters −→θ are given by ∂Jouter(−→θ , ←−θ ) ∂−→θ = ∂P(F|E; −→θ )/∂−→θ P(F|E; −→θ ) + Em|F,E;←− θ h ∂P(F|E; −→θ )/∂−→θ i P m P(m|E, F; −→θ )P(m|F, E; ←−θ ) . (16) The partial derivatives with respect to ←−θ are defined likewise. 3.2.3 Training Algorithm As the expectation in Eq. (16) is usually intractable to calculate due to the exponential search space of phrase alignment, we follow Dong et al. (2015) to use a Viterbi EM algorithm instead. As shown in Figure 2, the algorithm takes a set of source phrases E, a set of target phrases F, and a seed parallel lexicon d as input (line 1). After initializing model parameters Θ = {−→θ , ←−θ } (line 2), the algorithm calls the procedure ALIGN(F, E, Θ) to compute the Viterbi phrase alignment between E and F on which two models agree. 
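The procedure ALIGN just mentioned is approximated below (Eq. (18)) by intersecting the two directional Viterbi phrase alignments, after converting them to the link-set form of Eqs. (9)-(10). A minimal sketch of that conversion and agreement check, assuming the directional alignments are given as dictionaries like those produced by the retrieval sketch above (all names are ours):

```python
def to_link_set_s2t(m_fwd):
    """Source-to-target alignment {t: s or None} -> set of non-empty links {(s, t)} (Eq. 9)."""
    return {(s, t) for t, s in m_fwd.items() if s is not None}

def to_link_set_t2s(m_bwd):
    """Target-to-source alignment {s: t or None} -> set of non-empty links {(s, t)} (Eq. 10)."""
    return {(s, t) for s, t in m_bwd.items() if t is not None}

def agreed_alignment(m_fwd, m_bwd):
    """Approximate the agreed Viterbi phrase alignment as the link intersection (Eq. 18)."""
    return to_link_set_s2t(m_fwd) & to_link_set_t2s(m_bwd)

# Worked example from the text: m_fwd = (2, 3, 0, 0) and m_bwd = (0, 1, 2), 1-based.
m_fwd = {1: 2, 2: 3, 3: None, 4: None}   # target phrase t -> source phrase s
m_bwd = {1: None, 2: 1, 3: 2}            # source phrase s -> target phrase t
assert agreed_alignment(m_fwd, m_bwd) == {(2, 1), (3, 2)}
```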
Then, the algorithm updates the two models by normalizing counts collected from the Viterbi phrase alignment. The process iterates for K iterations and returns the final Viterbi phrase alignment and model parameters. 3.2.4 Computing Viterbi Phrase Alignments The procedure ALIGN(F, E, Θ) computes the Viterbi phrase alignment ˆm between E and F on which two models agree as follows: ˆm = argmax m n P(m|E, F; −→θ ) × P(m|F, E; ←−θ ) o . (17) 1027 Unfortunately, due to the exponential search space of phrase alignment, computing ˆm is also intractable. As a result, we approximate it as the intersection of two unidirectional Viterbi phrase alignments: ˆm ≈−→ m∗∩←− m∗, (18) where the unidirectional Viterbi phrase alignments are calculated as −→ m∗= argmax −→ m ( T Y t=1 P(f(t)|e(−→ mt); −→θ ) ) (19) ←− m∗= argmax ←− m ( S Y s=1 P(e(s)|f(←− ms); ←−θ ) ) . (20) The source-to-target Viterbi phrase alignment is calculated as −→ m∗= argmax −→ m n P(−→ m|E, F; −→θ ) o (21) = argmax −→ m n T Y t=1 P(f(t)|e(− → mt); −→θ ) o . (22) Dong et al. (2015) indicate that computing the Viterbi alignment for individual target phrases is independent and only need to focus on finding the most probable source phrase for each target phrase: −→ m∗ t = argmax s∈{0,1,...,S} n P(f(t)|e(s); −→θ ) o . (23) This can be cast as a translation retrieval problem (Zhang and Zong, 2013; Dong et al., 2014). Please refer to (Dong et al., 2015) for more details. The target-to-source Viterbi phrase alignment can be calculated similarly. 3.2.5 Updating Model Parameters Following Liang et al. (2006), we collect counts of model parameters only from the agreement term.2 Given the agreed Viterbi phrase alignment ˆm, the count of the source-to-target length model p(J|I) is given by c(J|I; E, F) = X ⟨s,t⟩∈ˆm δ(J(t), J)δ(I(s), I). (24) The new length probabilities can be obtained by p(J|I) = c(J|I; E, F) P J′ c(J′|I; E, F). (25) 2We experimented with collecting counts from both the unidirectional and agreement terms but obtained much worse results than counting only from the agreement term. jiejue maoyi jiufen resolve trade disputes jiejue maoyi jiufen resolve trade disputes (a) (b) Figure 3: Agreement between (a) Chinese-toEnglish and (b) English-to-Chinese word alignments. The links on which two models agree are highlighted in red. The inner agreement loss function (see Eq. (28)) aims to encourage the agreement at both the phrase and word levels. The count of the source-to-target translation model p(f|e) is given by c(f|e; E, F) = X ⟨s,t⟩∈ˆm p(f|e) PI(s) i=0 p(f|e(s) i ) × J(t) X j=1 δ(f, f(t) j ) I(s) X i=0 δ(e, e(s) i ) +σ(f, e, d). (26) The new translation probabilities can be obtained by p(f|e) = c(f|e; E, F) P f′ c(f′|e; E, F). (27) Counts of target-to-source length and translation models can be calculated in a similar way. 3.3 Inner Agreement 3.3.1 Definition As the outer agreement only considers the phrase alignment, the inner agreement takes both phrase alignment and word alignment into consideration: ∆inner(E, F, −→ m, ←− m, −→θ , ←−θ ) = −δ(−→ m, ←− m) × X ⟨s,t⟩∈−→ m X −→a ,←−a P(−→a |e(s), f(t); −→θ ) × P(←−a |f(t), e(s); ←−θ ) × δ(−→a , ←−a ). (28) For example, Figure 3 shows two examples of Chinese-to-English and English-to-Chinese word alignments. The shared links are highlighted in 1028 red. Our intuition is that a source phrase and a target phrase are more likely to be translations of each other if the two translation models also agree on word alignment within aligned phrases. 
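A minimal sketch of the word-level agreement signal behind Eq. (28): under an IBM-Model-1-style model the posterior of a single link has a closed form, and the inner-agreement score of a phrase pair sums the product of the two directional link posteriors, as formalized in Eq. (31) below. The dictionary-based translation tables and function names are our own assumptions.

```python
def link_posteriors(f_phrase, e_phrase, t_table):
    """Link posteriors under IBM Model 1: p(f_j|e_i) / sum_i' p(f_j|e_i').

    Returns {(i, j): posterior} with 1-based positions, i over e_phrase, j over f_phrase
    (NULL alignments are ignored in this sketch).
    """
    post = {}
    for j, f_word in enumerate(f_phrase, start=1):
        denom = sum(t_table.get((f_word, e_word), 1e-12) for e_word in e_phrase)
        for i, e_word in enumerate(e_phrase, start=1):
            post[(i, j)] = t_table.get((f_word, e_word), 1e-12) / denom
    return post

def inner_agreement_score(e_phrase, f_phrase, t_fwd, t_bwd):
    """Sum over links <i, j> of the product of the two directional link posteriors."""
    p_fwd = link_posteriors(f_phrase, e_phrase, t_fwd)   # source-to-target posteriors
    p_bwd = link_posteriors(e_phrase, f_phrase, t_bwd)   # target-to-source posteriors,
                                                         # keyed (j, i) = (target, source)
    return sum(p_fwd[(i, j)] * p_bwd[(j, i)] for (i, j) in p_fwd)
```

This score is what lets the inner agreement prefer phrase pairs whose internal word alignments are consistent in both translation directions.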
3.3.2 Training Objective and Algorithm The training objective for inner agreement is given by Jinner(−→θ , ←−θ ) = log P(F|E; −→θ ) + log P(E|F; ←−θ ) + log X m P(m|E, F; −→θ )P(m|F, E; ←−θ ) × X ⟨s,t⟩∈m X a P(a|e(s), f(t); −→θ ) × P(a|f(t), e(s); ←−θ ). (29) We still use the Viterbi EM algorithm as shown in Figure 2 for training the two models. 3.3.3 Computing Viterbi Phrase Alignments The agreed Viterbi phrase alignment is defined as ˆm = argmax m n P(m|E, F; −→θ )P(m|F, E; ←−θ ) × X ⟨s,t⟩∈m X a P(a|e(s), f(t); −→θ ) ×P(a|f(t), e(s); ←−θ ) o . (30) As computing ˆm is intractable, we still approximate it using the intersection of two unidirectional Viterbi phrase alignments (see Eq. (18)). The source-to-target Viterbi phrase alignment is calculated as −→ m∗= argmax −→ m n P(−→ m|E, F; −→θ ) × X ⟨s,t⟩∈− → m J(t) X j=1 I(s) X i=1 P(⟨i, j⟩|e(s), f(t); −→θ ) × P(⟨i, j⟩|f(t), e(s); ←−θ ) o , (31) where P(⟨i, j⟩|e(s), f(t); −→θ ) is source-to-target link posterior probability of the link ⟨i, j⟩being present (or absent) in the word alignment according to the source-to-target model, P(⟨i, j⟩|f(t), e(s); ←−θ ) is target-to-source link posterior probability. We follow Liang et al. (2006) to use the product of link posteriors to encourage the agreement at the level of word alignment. We use a coarse-to-fine approach (Dong et al., 2015) to compute the Viterbi alignment: first retrieving a coarse set of candidate source phrases using translation probabilities and then selecting the candidate with the highest score according to Eq. (31). The target-to-source Viterbi phrase alignment can be calculated similarly. 3.3.4 Updating Model Parameters Given the agreed Viterbi phrase alignment ˆm, the count of the source-to-target length model p(J|I) is still given by Eq. (24). The count of the translation model p(f|e) is calculated as c(f|e; E, F) = X ⟨s,t⟩∈ˆm I(s) X i=1 J(t) X j=1 P(⟨i, j⟩|e(s), f(t); −→θ ) × P(⟨i, j⟩|f(t), e(s); ←−θ ) × δ(f, f(t))δ(e, e(s)) +σ(f, e, d). (32) Counts of target-to-source length and translation models can be calculated in a similar way. 4 Experiments In this section, we evaluate our approach in two tasks: phrase alignment (Section 4.1) and machine translation (Section 4.2). 4.1 Alignment Evaluation 4.1.1 Evaluation Metrics Given two monolingual corpora E and F, we suppose there exists a ground truth parallel corpus G and denote an extracted parallel corpus as D. The quality of an extracted parallel corpus can be measured by F1 = 2|D ∩G|/(|D| + |G|). 4.1.2 Data Preparation Although it is appealing to apply our approach to dealing with real-world non-parallel corpora, it is time-consuming and labor-intensive to manually construct a ground truth parallel corpus. Therefore, we follow Dong et al. (2015) to build synthetic E, F, and G to facilitate the evaluation. We first extract a set of parallel phrases from a sentence-level parallel corpus using the stateof-the-art phrase-based translation system Moses (Koehn et al., 2007) and discard low-probability parallel phrases. Then, E and F can be constructed by corrupting the parallel phrase set by 1029 1 2 3 4 5 6 7 8 9 10 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 iteration agreement ratio inner outer no agreement Figure 4: Comparison of agreement ratios on the development set. seed C →E E →C Outer Inner 50 4.1 4.8 60.8 66.2 100 5.1 5.5 65.6 69.8 500 7.5 8.4 70.4 72.5 1,000 22.4 23.1 73.6 74.3 Table 1: Effect of seed lexicon size in terms of F1 on the development set. adding irrelevant source and target phrases randomly. 
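The construction just described, together with the F1 metric of Section 4.1.1, reduces to a few set operations. A minimal sketch, with phrases represented simply as token tuples and all helper names ours:

```python
import random

def corpus_f1(extracted, gold):
    """F1 = 2|D ∩ G| / (|D| + |G|) over sets of (source_phrase, target_phrase) pairs."""
    if not extracted and not gold:
        return 1.0
    return 2.0 * len(extracted & gold) / (len(extracted) + len(gold))

def build_synthetic_corpora(parallel_pairs, noise_src, noise_tgt, seed=0):
    """Corrupt a parallel phrase set into two monolingual phrase lists E and F.

    parallel_pairs: list of (e_phrase, f_phrase) pairs, kept aside as ground truth G.
    noise_src / noise_tgt: irrelevant source / target phrases mixed in as noise.
    """
    rng = random.Random(seed)
    E = [e for e, _ in parallel_pairs] + list(noise_src)
    F = [f for _, f in parallel_pairs] + list(noise_tgt)
    rng.shuffle(E)
    rng.shuffle(F)
    G = set(parallel_pairs)
    return E, F, G
```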
Note that the parallel phrase set can serve as the ground truth parallel corpus G. We refer to the non-parallel phrases in E and F as noise. From LDC Chinese-English parallel corpora, we constructed a development set and a test set. The development set contains 20K parallel phrases, 20K noisy Chinese phrases, and 20K noisy English phrases. The test test contains 20K parallel phrases, 180K noisy Chinese phrases, and 180K noisy English phrases. The seed parallel lexicon contains 1K entries. 4.1.3 Comparison of Agreement Ratios We introduce agreement ratio to measure to what extent two unidirectional models agree on phrase alignment: ratio = 2|−→ m∗∩←− m∗| |−→ m∗| + |←− m∗|. (33) Figure 4 shows the agreement ratios of independent training (“no agreement”), joint training with the outer agreement (“outer”), and joint training with the inner agreement (“inner”). We find that independently trained unidirectional models noise C →E E →C Outer Inner C E 0 0 58.5 61.2 86.5 86.1 0 10K 41.0 54.4 83.6 83.8 0 20K 28.3 48.3 80.1 81.2 10K 0 54.7 43.1 84.9 84.3 20K 0 50.4 31.4 83.8 83.6 10K 10K 34.9 34.4 80.0 79.7 20K 20K 22.4 23.1 73.6 74.3 Table 2: Effect of noise in terms of F1 on the development set. hardly agree on phrase alignment, suggesting that each model can only capture partial aspects of translation modeling on non-parallel corpora. In contrast, imposing the agreement term significantly increases the agreement ratios: after 10 iterations, about 40% of phrase alignment links are shared by two models. 4.1.4 Effect of Seed Lexicon Size Table 1 shows the F1 scores of the Chinese-toEnglish model (“C →E”), the English-to-Chinese model (“E →C”), joint learning based on the outer agreement (“outer”), and jointing learning based on the inner agreement (“inner”) over various sizes of seed lexicons on the development set. We find that agreement-based learning obtains substantial improvements over independent learning across all sizes. More importantly, even with a seed lexicon containing only 50 entries, agreement-based learning is able to achieve F1 scores above 60%. The inner agreement performs better than the outer agreement by taking the consensus at the word level into account. 4.1.5 Effect of Noise Table 2 demonstrates the effect of noise on the development set. In row 1, “0+0” denotes there is no noise, which can be seen as an upper bound. Adding noise, either on the Chinese side or on the English side, deteriorates the F1 scores for all methods. Adding noise on the English side makes predicting phrase alignment in the C →E direction more challenging due to the enlarged search space. The situation is similar in the reverse direction. It is clear that agreement-based learning is more robust to noise: while independent training suffers from a reduction of 40% in terms of F1 for the “20K + 20K” setting, agreement-based learning still achieves F1 scores over 70%. 1030 1 2 3 4 5 6 7 8 9 10 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 iteration F1 inner outer E−>C C−>E Figure 5: Comparison of F1 scores on the test set. Chinese jingji English economy Chinese jialebi English caribbean Chinese zhengzhi huanjing English political environment Chinese jiaoyisuo shichang jiage zhishu English exchange market price index Chinese qianding bianjing maoyi xieding English signed border trade agreements Table 3: Example learned parallel lexicons and phrases. New words that are not included in the seed lexicon are highlighted in italic. 4.1.6 Results Figure 5 gives the final results on the test set. 
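For reference, the agreement ratio of Eq. (33) reported in Figure 4, like the F1 scores plotted in Figure 5, is a simple set statistic over link sets; a minimal sketch, reusing the link-set representation from the earlier sketches (names are ours):

```python
def agreement_ratio(links_fwd, links_bwd):
    """ratio = 2 |m_fwd ∩ m_bwd| / (|m_fwd| + |m_bwd|)  (Eq. 33), over (s, t) link sets."""
    if not links_fwd and not links_bwd:
        return 1.0
    return 2.0 * len(links_fwd & links_bwd) / (len(links_fwd) + len(links_bwd))

# E.g. with directional Viterbi link sets like those of the earlier example:
print(agreement_ratio({(2, 1), (3, 2)}, {(2, 1), (3, 2)}))   # 1.0: full agreement
print(agreement_ratio({(2, 1), (3, 2)}, {(2, 1), (4, 3)}))   # 0.5: partial agreement
```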
We find that agreement-based training achieves significant improvements over independent training. By considering the consensus on both phrase and word alignments, the inner agreement significantly outperforms the outer agreement. Notice that Dong et al. (2015) only add noise on one side while we add noisy phrases on both sides, which makes phrase alignment more challenging. Table 3 shows example learned parallel words and phrases. The lexicon is built from the translation table by retaining high-probability word pairs. Therefore, our approach is capable of learning both new words and new phrases unseen in the seed lexicon. 4.2 Translation Evaluation Following Zhang and Zong (2013) and Dong et al. (2015), we evaluate our approach on domain adaptation for machine translation. The data set consists of two in-domain nonparallel corpora and an out-domain parallel corpus. The in-domain non-parallel corpora consists of 2.65M Chinese phrases and 3.67M English phrases extracted from LDC news articles. We use a small out-domain parallel corpus extracted from financial news of FTChina which contains 10K phrase pairs. The task is to extract a parallel corpus from in-domain non-parallel corpora starting from a small out-domain parallel corpus. We use the state-of-the-art translation system Moses (Koehn et al., 2007) and evaluate the performance on Chinese-English NIST datasets. The development set is NIST 2006 and the test set is NIST 2005. The evaluation metric is caseinsensitive BLEU4 (Papineni et al., 2002). We use the SRILM toolkit (Stolcke, 2002) to train a 4-gram English language model on a monolingual corpus with 399M English words. Table 4 shows the results. At iteration 0, only the out-domain corpus is used and the BLEU score is 5.61. All methods iteratively extract parallel phrases from non-parallel corpora and enlarge the extracted parallel corpus. We find that agreementbased learning achieves much higher BLEU scores while obtains a smaller parallel corpus as compared with independent learning. One possible reason is that the agreement-based learning rules out most unlikely phrase pairs by encouraging consensus between two models. 5 Conclusion We have presented agreement-based training for learning parallel lexicons and phrases from nonparallel corpora. By modeling the agreement on both phrase alignment and word alignment, our approach achieves significant improvements in both alignment and translation evaluations. In the future, we plan to apply our approach to real-world non-parallel corpora to further verify its effectiveness. It is also interesting to extend the phrase translation model to more sophisticated models such as IBM models 2-5 (Brown et al., 1993) and HMM (Vogel and Ney, 1996). Acknowledgments We sincerely thank the reviewers for their valuable suggestions. We also thank Meng Zhang, Yankai Lin, Shiqi Shen, Meiping Dong and Congyu Fu for their insightful discussions. Yang Liu is sup1031 Iteration Corpus Size BLEU E→C C→E Outer Inner E→C C→E Outer Inner 0 10k 5.61 1 145k 162k 59k 73k 8.65 8.90 13.53 13.74 2 195k 215k 69k 101k 8.82 9.47 15.26 15.61 3 209k 231k 88k 132k 8.42 9.29 16.88 16.94 4 214k 238k 106k 159k 8.46 9.27 17.15 17.83 5 217k 241k 123k 181k 8.87 9.40 17.94 18.89 6 219k 243k 137k 197k 8.52 9.30 18.56 19.47 7 222k 247k 140k 207k 8.81 9.22 18.72 19.46 8 224k 249k 153k 220k 8.71 9.26 18.84 19.50 9 227k 251k 159k 233k 8.92 9.35 19.05 19.63 10 229k 254k 163k 239k 8.33 9.06 19.39 19.78 Table 4: Results on domain adaptation for machine translation. 
ported by the National Natural Science Foundation of China (No. 61522204), the 863 Program (2015AA011808), and Samsung R&D Institute of China. Huanbo Luan is supported by the National Natural Science Foundation of China (No. 61303075). Maosong Sun is supported by the Major Project of the National Social Science Foundation of China (13&ZD190). References Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguisitics. Mauro Cettolo, Marcello Federico, and Nicola Bertoldi. 2010. Mining parallel fragments from comparable texts. In Proceedings of IWSLT. Meiping Dong, Yong Cheng, Yang Liu, Jia Xu, Maosong Sun, Tatsuya Izuha, and Jie Hao. 2014. Query lattice for translation retrieval. In Proceedings of COLING. Meiping Dong, Yang Liu, Huanbo Luan, Maosong Sun, Tatsuya Izuha, and Dakun Zhang. 2015. Iterative learning of parallel lexicons and phrases from non-parallel corpora. In Proceedings of IJCAI. Qing Dou, Ashish Vaswani, and Kevin Knight. 2014. Beyond parallel data: Joint word alignment and decipherment improves machine translation. In Proceedings of EMNLP. Pascale Fung and Percy Cheung. 2004. Mining verynon-parallel corpora: Parallel sentence and lexicon extraction via bootstrapping and em. In Proceedings of EMNLP. Eric Gaussier, J.M. Renders, I. Matveeva, C. Goutte, and H. Dejean. 2004. A geometric view on bilingual lexicon extraction from comparable corpora. In Proceedings of ACL. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of ACL. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of NAACL. Percy Liang, Dan Klein, and I. Jordan, Michael. 2008. Alignment-based learning. In Proceedings of NIPS. Daniel Marcu and Daniel Wong. 2002. A phrase-based joint probability model for statistical machine translation. In Proceedings of EMNLP. I. Dan Melamed. 1997. Automatic discovery of noncompositional compounds in parallel data. In Proceedings of EMNLP. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv:1309.4168. 1032 Dragos Stefan Munteanu and Daniel Marcu. 2006. Extracting parallel sub-sentential fragments from nonparallel corpora. In Proceedings of ACL. Malte Nuhn, Arne Mauser, and Hermann Ney. 2012. Deciphering foreign language by combining language models and context vectors. In Proceedings of ACL. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguisitics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a methof for automatic evaluation of machine translation. In Proceedings of ACL. 
Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of ACL. Frank Smadja and Kathleen McKeown. 1994. Translating collocations for use in bilingual lexicons. In Proceedings of the ARPA Human Language Technology Workshop. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Proceedings of ICSLP. Stephan Vogel and Hermann Ney. 1996. Hhm-based word alignment in statistical translation. In Proceedings of COLING. Ivan Vuli´c and Marie-Francine Moens. 2013. A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else). In Proceedings of EMNLP. Ivan Vuli´c and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction. In Proceedings of ACL. Dekai Wu and Xuanyin Xia. 1994. Learning an english-chinese lexicon from a parallel corpus. In Proceedings of the ARPA Human Language Technology Workshop. Jiajun Zhang and Chengqing Zong. 2013. Learning a phrase-based translation model from monolingual data with application to domain adaptation. In Proceedings of ACL. 1033
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1034–1043, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Deep Fusion LSTMs for Text Semantic Matching Pengfei Liu, Xipeng Qiu∗, Jifan Chen, Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {pfliu14,xpqiu,jfchen14,xjhuang}@fudan.edu.cn Abstract Recently, there is rising interest in modelling the interactions of text pair with deep neural networks. In this paper, we propose a model of deep fusion LSTMs (DF-LSTMs) to model the strong interaction of text pair in a recursive matching way. Specifically, DF-LSTMs consist of two interdependent LSTMs, each of which models a sequence under the influence of another. We also use external memory to increase the capacity of LSTMs, thereby possibly capturing more complicated matching patterns. Experiments on two very large datasets demonstrate the efficacy of our proposed architecture. Furthermore, we present an elaborate qualitative analysis of our models, giving an intuitive understanding how our model worked. 1 Introduction Among many natural language processing (NLP) tasks, such as text classification, question answering and machine translation, a common problem is modelling the relevance/similarity of a pair of texts, which is also called text semantic matching. Due to the semantic gap problem, text semantic matching is still a challenging problem. Recently, deep learning is rising a substantial interest in text semantic matching and has achieved some great progresses (Hu et al., 2014; Qiu and Huang, 2015; Wan et al., 2016). According to their interaction ways, previous models can be classified into three categories: Weak interaction Models Some early works focus on sentence level interactions, such as ARCI(Hu et al., 2014), CNTN(Qiu and Huang, 2015) ∗Corresponding author and so on. These models first encode two sequences into continuous dense vectors by separated neural models, and then compute the matching score based on sentence encoding. In this paradigm, two sentences have no interaction until arriving final phase. Semi-interaction Models Another kind of models use soft attention mechanism to obtain the representation of one sentence by depending on representation of another sentence, such as ABCNN (Yin et al., 2015), Attention LSTM (Rockt¨aschel et al., 2015; Hermann et al., 2015). These models can alleviate the weak interaction problem to some extent. Strong Interaction Models Some models build the interaction at different granularity (word, phrase and sentence level), such as ARC-II (Hu et al., 2014), MultiGranCNN (Yin and Sch¨utze, 2015), Multi-Perspective CNN (He et al., 2015), MV-LSTM (Wan et al., 2016), MatchPyramid (Pang et al., 2016). The final matching score depends on these different levels of interactions. In this paper, we adopt a deep fusion strategy to model the strong interactions of two sentences. Given two texts x1:m and y1:n, we define a matching vector hi,j to represent the interaction of the subsequences x1:i and y1:j. hi,j depends on the matching vectors hs,t on previous interactions 1 ≤s < i and 1 ≤t < j. Thus, text matching can be regarded as modelling the interaction of two texts in a recursive matching way. Following this idea, we propose deep fusion long short-term memory neural networks (DFLSTMs) to model the interactions recursively. 
More concretely, DF-LSTMs consist of two interconnected conditional LSTMs, each of which models a piece of text under the influence of another. The output vector of DF-LSTMs is fed into a task-specific output layer to compute the match1034 Gymnast get ready for a competition Female gymnast warm up before a competition   Figure 1: A motivated example to illustrate our recursive composition mechanism. ing score. The contributions of this paper can be summarized as follows. 1. Different with previous models, DF-LSTMs model the strong interactions of two texts in a recursive matching way, which consist of two inter- and intra-dependent LSTMs. 2. Compared to the previous works on text matching, we perform extensive empirical studies on two very large datasets. Experiment results demonstrate that our proposed architecture is more effective. 3. We present an elaborate qualitative analysis of our model, giving an intuitive understanding how our model worked. 2 Recursively Text Semantic Matching To facilitate our model, we firstly give some definitions. Given two sequences X = x1, x2, · · · , xm and Y = y1, y2, · · · , yn, most deep neural models try to represent their semantic relevance by a matching vector h(X, Y ), which is followed by a score function to calculate the matching score. The weak interaction methods decompose matching vector by h(X, Y ) = f(h(X), h(Y )), where function f(·) may be one of some basic operations or the combination of them: concatenation, affine transformation, bilinear, and so on. In this paper, we propose a strong interaction of two sequences to decompose matching vector h(X, Y ) in a recursive way. We refer to the interaction of the subsequences x1:i and y1:j as hi,j(X, Y ), which depends on previous interactions hs,t(X, Y ) for 1 ≤s < i and 1 ≤t < j. Figure 1 gives an example to illustrate this. For sentence pair X =“Female gymnast warm up before a competition”, Y =“Gymnast get ready for a competition”, considering the interaction (h4,4) between x1:4 = “Female gymnast warm up” and y1:4 = “Gymnast get ready for”, which is composed by the interactions between their subsequences (h1,4, · · · , h3,4, h4,1, · · · , h4,3). We can see that a strong interaction between two sequences can be decomposed in recursive topology structure. The matching vector hi,j(X, Y ) can be written as hi,j(X, Y ) = hi,j(X|Y ) ⊕hi,j(Y |X), (1) where hi,j(X|Y ) refers to conditional encoding of subsequence x1:i influenced by y1:j. Meanwhile, hi,j(Y |X) is conditional encoding of subsequence y1:j influenced by subsequence x1:i; ⊕is concatenation operation. These two conditional encodings depend on their history encodings. Based on this, we propose deep fusion LSTMs to model the matching of texts by recursive composition mechanism, which can better capture the complicated interaction of two sentences due to fully considering the interactions between subsequences. 3 Long Short-Term Memory Network Long short-term memory neural network (LSTM) (Hochreiter and Schmidhuber, 1997) is a type of recurrent neural network (RNN) (Elman, 1990), and specifically addresses the issue of learning long-term dependencies. LSTM maintains a memory cell that updates and exposes its content only when deemed necessary. While there are numerous LSTM variants, here we use the LSTM architecture used by (Jozefowicz et al., 2015), which is similar to the architecture of (Graves, 2013) but without peep-hole connections. 
We define the LSTM units at each time step $t$ to be a collection of vectors in $\mathbb{R}^d$: an input gate $\mathbf{i}_t$, a forget gate $\mathbf{f}_t$, an output gate $\mathbf{o}_t$, a memory cell $\mathbf{c}_t$ and a hidden state $\mathbf{h}_t$, where $d$ is the number of LSTM units. The elements of the gating vectors $\mathbf{i}_t$, $\mathbf{f}_t$ and $\mathbf{o}_t$ lie in $[0, 1]$. The LSTM is precisely specified as follows:
$$\begin{bmatrix} \tilde{\mathbf{c}}_t \\ \mathbf{o}_t \\ \mathbf{i}_t \\ \mathbf{f}_t \end{bmatrix} = \begin{bmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \end{bmatrix} T_{A,b} \begin{bmatrix} \mathbf{x}_t \\ \mathbf{h}_{t-1} \end{bmatrix}, \quad (2)$$
$$\mathbf{c}_t = \tilde{\mathbf{c}}_t \odot \mathbf{i}_t + \mathbf{c}_{t-1} \odot \mathbf{f}_t, \quad (3)$$
$$\mathbf{h}_t = \mathbf{o}_t \odot \tanh(\mathbf{c}_t), \quad (4)$$
where $\mathbf{x}_t$ is the input at the current time step; $T_{A,b}$ is an affine transformation that depends on the network parameters $A$ and $b$; $\sigma$ denotes the logistic sigmoid function and $\odot$ denotes element-wise multiplication. Intuitively, the forget gate controls how much of each unit of the memory cell is erased, the input gate controls how much each unit is updated, and the output gate controls the exposure of the internal memory state. The update of each LSTM unit can be written compactly as
$$(\mathbf{h}_t, \mathbf{c}_t) = \mathrm{LSTM}(\mathbf{h}_{t-1}, \mathbf{c}_{t-1}, \mathbf{x}_t), \quad (5)$$
where $\mathrm{LSTM}(\cdot, \cdot, \cdot)$ is a shorthand for Eqs. (2)-(4). LSTM can map an input sequence of arbitrary length to a fixed-size vector, and has been successfully applied to a wide range of NLP tasks, such as machine translation (Sutskever et al., 2014), language modelling (Sutskever et al., 2011), text matching (Rocktäschel et al., 2015) and text classification (Liu et al., 2015).

Figure 2: Illustration of the DF-LSTMs unit.

4 Deep Fusion LSTMs for Recursively Semantic Matching

To deal with two sentences, one straightforward method is to model them with two separate LSTMs. However, this method has difficulty modelling the local interactions of two sentences. Following the recursive matching strategy, we propose a neural model of deep fusion LSTMs (DF-LSTMs), which consists of two interdependent LSTMs that capture the inter- and intra-interactions between two sequences. Figure 2 gives an illustration of the DF-LSTMs unit.

To facilitate our model, we first give some definitions. Given two sequences $X = x_1, x_2, \cdots, x_n$ and $Y = y_1, y_2, \cdots, y_m$, we let $\mathbf{x}_i \in \mathbb{R}^d$ denote the embedded representation of the word $x_i$. The standard LSTM has one temporal dimension: when dealing with a sentence, it regards the position as the time step, so at position $i$ of sentence $x_{1:n}$ the output $\mathbf{h}_i$ reflects the meaning of the subsequence $x_{1:i} = x_1, \cdots, x_i$.

To model the interaction of two sentences in a recursive way, we define $\mathbf{h}_{i,j}$ to represent the interaction of the subsequences $x_{1:i}$ and $y_{1:j}$, which is computed by
$$\mathbf{h}_{i,j} = \mathbf{h}^{(x)}_{i,j} \oplus \mathbf{h}^{(y)}_{i,j}, \quad (6)$$
where $\mathbf{h}^{(x)}_{i,j}$ denotes the encoding of subsequence $x_{1:i}$ in the first LSTM, influenced by the output of the second LSTM on subsequence $y_{1:j}$; $\mathbf{h}^{(y)}_{i,j}$ is the encoding of subsequence $y_{1:j}$ in the second LSTM, influenced by the output of the first LSTM on subsequence $x_{1:i}$. More concretely,
$$(\mathbf{h}^{(x)}_{i,j}, \mathbf{c}^{(x)}_{i,j}) = \mathrm{LSTM}(\mathbf{H}_{i,j}, \mathbf{c}^{(x)}_{i-1,j}, \mathbf{x}_i), \quad (7)$$
$$(\mathbf{h}^{(y)}_{i,j}, \mathbf{c}^{(y)}_{i,j}) = \mathrm{LSTM}(\mathbf{H}_{i,j}, \mathbf{c}^{(y)}_{i,j-1}, \mathbf{y}_j), \quad (8)$$
where $\mathbf{H}_{i,j}$ is the history information available before position $(i, j)$. The simplest setting is $\mathbf{H}_{i,j} = \mathbf{h}^{(x)}_{i-1,j} \oplus \mathbf{h}^{(y)}_{i,j-1}$. In this case, our model can be regarded as grid LSTMs (Kalchbrenner et al., 2015).
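To make the recursion of Eqs. (6)-(8) concrete, the following is a minimal sketch of the forward pass in this simplest, grid-LSTM-like setting, where $\mathbf{H}_{i,j}$ is just the concatenation of the two neighbouring hidden states. It illustrates the recursion pattern only and is not the implementation used in the experiments: the NumPy setup, random parameter initialization, and all names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_ctx, c_prev, x, W, b):
    """One LSTM update (Eqs. 2-4): gates computed from [x; context], then cell/hidden update."""
    z = W @ np.concatenate([x, h_ctx]) + b
    d = c_prev.shape[0]
    c_tilde = np.tanh(z[0:d])
    o = sigmoid(z[d:2 * d])
    i = sigmoid(z[2 * d:3 * d])
    f = sigmoid(z[3 * d:4 * d])
    c = c_tilde * i + c_prev * f
    h = o * np.tanh(c)
    return h, c

def df_lstm_forward(X, Y, d, rng=np.random.default_rng(0)):
    """Grid-style DF-LSTM pass: X is an (n, d_w) array of word vectors, Y is (m, d_w)."""
    n, d_w = X.shape
    m, _ = Y.shape
    # Shared history is H_{i,j} = h^x_{i-1,j} ⊕ h^y_{i,j-1}, so the context size is 2d.
    Wx = 0.1 * rng.standard_normal((4 * d, d_w + 2 * d)); bx = np.zeros(4 * d)
    Wy = 0.1 * rng.standard_normal((4 * d, d_w + 2 * d)); by = np.zeros(4 * d)
    hx = np.zeros((n + 1, m + 1, d)); cx = np.zeros((n + 1, m + 1, d))
    hy = np.zeros((n + 1, m + 1, d)); cy = np.zeros((n + 1, m + 1, d))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            H = np.concatenate([hx[i - 1, j], hy[i, j - 1]])          # history H_{i,j}
            hx[i, j], cx[i, j] = lstm_step(H, cx[i - 1, j], X[i - 1], Wx, bx)  # Eq. (7)
            hy[i, j], cy[i, j] = lstm_step(H, cy[i, j - 1], Y[j - 1], Wy, by)  # Eq. (8)
    return np.concatenate([hx[n, m], hy[n, m]])                       # h_{n,m} (Eq. 6)

# Toy usage with random "embeddings": 5-word and 7-word sentences, d_w = 8, d = 4.
match_vec = df_lstm_forward(np.random.rand(5, 8), np.random.rand(7, 8), d=4)
```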
However, there are $m \times n$ interactions in total in the recursive matching process, and an LSTM can be hard pressed to keep all of them in its internal memory. Therefore, inspired by recent neural memory networks, such as the neural Turing machine (Graves et al., 2014) and the memory network (Sukhbaatar et al., 2015), we introduce two external memories to keep the history information, which relieves the pressure on the low-capacity internal memory.

Following (Tran et al., 2016), we use an external memory constructed from history hidden states, defined as
$$M_t = \{\mathbf{h}_{t-K}, \ldots, \mathbf{h}_{t-1}\} \in \mathbb{R}^{K \times d}, \quad (9)$$
where $K$ is the number of memory segments, which is instance-independent and predefined as a hyper-parameter; $d$ is the size of each segment; and $\mathbf{h}_t$ is the hidden state at time $t$ emitted by the LSTM.

At position $(i, j)$, two memory blocks $M^{(x)}$ and $M^{(y)}$ are used to store the contextual information of $x$ and $y$ respectively:
$$M^{(x)}_{i,j} = \{\mathbf{h}^{(x)}_{i-K,j}, \ldots, \mathbf{h}^{(x)}_{i-1,j}\}, \quad (10)$$
$$M^{(y)}_{i,j} = \{\mathbf{h}^{(y)}_{i,j-K}, \ldots, \mathbf{h}^{(y)}_{i,j-1}\}, \quad (11)$$
where $\mathbf{h}^{(x)}$ and $\mathbf{h}^{(y)}$ are the outputs of the two conditional LSTMs at different positions.

The history information can be read from these two memory blocks. We denote a read vector from an external memory as $\mathbf{r}_{i,j} \in \mathbb{R}^d$, which is computed by a soft attention mechanism:
$$\mathbf{r}^{(x)}_{i,j} = \mathbf{a}^{(x)}_{i,j} M^{(x)}_{i,j}, \quad (12)$$
$$\mathbf{r}^{(y)}_{i,j} = \mathbf{a}^{(y)}_{i,j} M^{(y)}_{i,j}, \quad (13)$$
where $\mathbf{a}_{i,j} \in \mathbb{R}^K$ is an attention distribution over the corresponding memory $M_{i,j} \in \mathbb{R}^{K \times d}$. More concretely, each scalar $a_{i,j,k}$ in the attention distribution $\mathbf{a}_{i,j}$ is obtained by a softmax over the $K$ memory slots:
$$a^{(x)}_{i,j,k} = \mathrm{softmax}(g(M^{(x)}_{i,j,k}, \mathbf{r}^{(x)}_{i-1,j}, \mathbf{x}_i)), \quad (14)$$
$$a^{(y)}_{i,j,k} = \mathrm{softmax}(g(M^{(y)}_{i,j,k}, \mathbf{r}^{(y)}_{i,j-1}, \mathbf{y}_j)), \quad (15)$$
where $M_{i,j,k} \in \mathbb{R}^d$ is the $k$-th row memory vector at position $(i, j)$, and $g(\cdot)$ is an align function defined by
$$g(\mathbf{x}, \mathbf{y}, \mathbf{z}) = \mathbf{v}^{\mathrm{T}} \tanh(W_a[\mathbf{x}; \mathbf{y}; \mathbf{z}]), \quad (16)$$
where $\mathbf{v} \in \mathbb{R}^d$ is a parameter vector and $W_a \in \mathbb{R}^{d \times 3d}$ is a parameter matrix.

The history information $\mathbf{H}_{i,j}$ in Eqs. (7) and (8) is then computed by
$$\mathbf{H}_{i,j} = \mathbf{r}^{(x)}_{i,j} \oplus \mathbf{r}^{(y)}_{i,j}. \quad (17)$$
By incorporating external memory blocks, DF-LSTMs allow the network to re-read history interaction information, and can therefore more easily capture complicated and long-distance matching patterns. As shown in Figure 3, the forward pass of DF-LSTMs can be unfolded along a two-dimensional ordering.

Figure 3: Illustration of unfolded DF-LSTMs.

4.1 Related Models

Our model is inspired by some recently proposed models based on recurrent neural networks (RNNs). One kind of model is the multi-dimensional recurrent neural network (MD-RNN) (Graves et al., 2007; Graves and Schmidhuber, 2009; Byeon et al., 2015) from the machine learning and computer vision communities. As mentioned above, if we use only the neighbouring states, our model can be regarded as grid LSTMs (Kalchbrenner et al., 2015). What differs is the dependency relation between the current state and the history states: our model uses external memory to increase its memory capacity and can therefore store many useful interactions of subsequences, which allows it to discover matching patterns with long-range dependence.

Another kind of model is the memory-augmented RNN, such as the long short-term memory-network (Cheng et al., 2016) and the recurrent memory network (Tran et al., 2016), which extend memory networks (Bahdanau et al., 2014) and equip the RNN with the ability to re-read history information. While they focus on sequence modelling, our model concentrates on modelling the interactions of a sequence pair.
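Returning to the memory mechanism above, the read of Eqs. (10)-(17) can likewise be sketched in a few lines for one direction: the last $K$ hidden states form the memory block, the align function $g$ of Eq. (16) scores each slot against the previous read vector and the current word embedding, and a softmax over the $K$ slots yields the read vector. The NumPy setup and names below are illustrative assumptions, and the sketch takes the embedding and hidden sizes to both be $d$, as Eq. (16) does.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memory_read(memory, r_prev, x, v, Wa):
    """Soft attention over a K x d memory block, one direction (Eqs. 12-16).

    memory: K x d matrix of the last K hidden states (Eq. 10 or 11).
    r_prev: previous read vector, e.g. r^(x)_{i-1,j}, shape (d,); x: current word embedding (d,).
    v: (d,) parameter vector and Wa: (d, 3d) parameter matrix of the align function g (Eq. 16).
    """
    scores = np.array([v @ np.tanh(Wa @ np.concatenate([m_k, r_prev, x]))
                       for m_k in memory])          # g(M_k, r_prev, x) for each slot
    a = softmax(scores)                             # attention over the K slots (Eq. 14)
    return a @ memory                               # read vector r_{i,j} (Eq. 12)

# Toy usage: K = 3 memory slots of size d = 4.
rng = np.random.default_rng(0)
d, K = 4, 3
M = rng.standard_normal((K, d))
r = memory_read(M, np.zeros(d), rng.standard_normal(d), rng.standard_normal(d),
                rng.standard_normal((d, 3 * d)))
# In the full model, H_{i,j} = r^(x)_{i,j} ⊕ r^(y)_{i,j} (Eq. 17) feeds Eqs. (7)-(8).
```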
5 Training 5.1 Task Specific Output There are two popular types of text matching tasks in NLP. One is ranking task, such as community question answering. Another is classification task, such as textual entailment. We use different ways to calculate matching score for these two types of tasks. 1. For ranking task, the output is a scalar matching score, which is obtained by a linear transformation of the matching vector obtained by FD-LSTMs. 2. For classification task, the outputs are the probabilities of the different classes, which 1037 is computed by a softmax function on the matching vector obtained by FD-LSTMs. 5.2 Loss Function Accordingly, we use two loss functions to deal with different sentence matching tasks. Max-Margin Loss for Ranking Task Given a positive sentence pair (X, Y ) and its corresponding negative pair (X, ˆY ). The matching score s(X, Y ) should be larger than s(X, ˆY ). For this task, we use the contrastive max-margin criterion (Bordes et al., 2013; Socher et al., 2013) to train our model on matching task. The ranking-based loss is defined as L(X, Y, ˆY ) = max(0, 1 −s(X, Y ) + s(X, ˆY )). (18) where s(X, Y ) is predicted matching score for (X, Y ). Cross-entropy Loss for Classification Task Given a sentence pair (X, Y ) and its label l. The output ˆl of neural network is the probabilities of the different classes. The parameters of the network are trained to minimise the cross-entropy of the predicted and true label distributions. L(X, Y ; l,ˆl) = − C X j=1 lj log(ˆlj), (19) where l is one-hot representation of the groundtruth label l; ˆl is predicted probabilities of labels; C is the class number. 5.3 Optimizer To minimize the objective, we use stochastic gradient descent with the diagonal variant of AdaGrad (Duchi et al., 2011). To prevent exploding gradients, we perform gradient clipping by scaling the gradient when the norm exceeds a threshold (Graves, 2013). 5.4 Initialization and Hyperparameters Orthogonal Initialization We use orthogonal initialization of our LSTMs, which allows neurons to react to the diverse patterns and is helpful to train a multi-layer network (Saxe et al., 2013). Unsupervised Initialization The word embeddings for all of the models are initialized with the 100d GloVe vectors (840B token version, (Pennington et al., 2014)). The other parameters are initialized by randomly sampling from uniform distribution in [−0.1, 0.1]. Hyper-parameters MQA RTE K 9 9 Embedding size 100 100 Hidden layer size 50 100 Initial learning rate 0.05 0.005 Regularization 5E−5 1E−5 Table 1: Hyper-parameters for our model on two tasks. Hyperparameters For each task, we used a stacked DF-LSTM and take the hyperparameters which achieve the best performance on the development set via an small grid search over combinations of the initial learning rate [0.05, 0.0005, 0.0001], l2 regularization [0.0, 5E−5, 1E−5, 1E−6] and the values of K [1, 3, 6, 9, 12]. The final hyper-parameters are set as Table 1. 6 Experiment In this section, we investigate the empirical performances of our proposed model on two different text matching tasks: classification task (recognizing textual entailment) and ranking task (matching of question and answer). 6.1 Competitor Methods • Neural bag-of-words (NBOW): Each sequence is represented as the sum of the embeddings of the words it contains, then they are concatenated and fed to a MLP. • Single LSTM: Two sequences are encoded by a single LSTM, proposed by (Rockt¨aschel et al., 2015). 
• Parallel LSTMs: Two sequences are first encoded by two LSTMs separately, then they are concatenated and fed to a MLP. • Attention LSTMs: Two sequences are encoded by LSTMs with attention mechanism, proposed by (Rockt¨aschel et al., 2015). • Word-by-word Attention LSTMs: An improved strategy of attention LSTMs, which introduces word-by-word attention mechanism and is proposed by (Rockt¨aschel et al., 2015). 1038 Model k Train Test NBOW 100 77.9 75.1 single LSTM (Rockt¨aschel et al., 2015) 100 83.7 80.9 parallel LSTMs (Bowman et al., 2015) 100 84.8 77.6 Attention LSTM (Rockt¨aschel et al., 2015) 100 83.2 82.3 Attention(w-by-w) LSTM (Rockt¨aschel et al., 2015) 100 83.7 83.5 DF-LSTMs 100 85.2 84.6 Table 2: Accuracies of our proposed model against other neural models on SNLI corpus. 6.2 Experiment-I: Recognizing Textual Entailment Recognizing textual entailment (RTE) is a task to determine the semantic relationship between two sentences. We use the Stanford Natural Language Inference Corpus (SNLI) (Bowman et al., 2015). This corpus contains 570K sentence pairs, and all of the sentences and labels stem from human annotators. SNLI is two orders of magnitude larger than all other existing RTE corpora. Therefore, the massive scale of SNLI allows us to train powerful neural networks such as our proposed architecture in this paper. 6.2.1 Results Table 2 shows the evaluation results on SNLI. The 2nd column of the table gives the number of hidden states. From experimental results, we have several experimental findings. The results of DF-LSTMs outperform all the competitor models with the same number of hidden states while achieving comparable results to the state-of-the-art and using much fewer parameters, which indicate that it is effective to model the strong interactions of two texts in a recursive matching way. All models outperform NBOW by a large margin, which indicate the importance of words order in semantic matching. The strong interaction models surpass the weak interaction models, for example, compared with parallel LSTMs, DF-LSTMs obtain improvement by 7.0%. 6.2.2 Understanding Behaviors of Neurons in DF-LSTMs To get an intuitive understanding of how the DFLSTMs work on this problem, we examined the A dog is being chased by a cat dog another by being toy pet with running Dog −0.4 −0.2 0 0.2 0.4 (a) 5-th neuron A family is at the beach feet their at lap waves ocean feeling enjoys family young A −0.2 0 0.2 0.4 0.6 0.8 (b) 11-th neuron Figure 4: Illustration of two interpretable neurons and some word-pairs captured by these neurons. The darker patches denote the corresponding activations are higher. neuron activations in the last aggregation layer while evaluating the test set. We find that some cells are bound to certain roles. We refer to hi,j,k as the activation of the kth neuron at the position of (i, j), where i ∈ {1, . . . , n} and j ∈{1, . . . , m}. By visualizing the hidden state hi,j,k and analyzing the maximum activation, we can find that there exist multiple interpretable neurons. For example, when some contextualized local perspectives are semantically related at point (i, j) of the sentence pair, the activation value of hidden neuron hi,j,k tends to be maximum, meaning that the model could capture some reasoning patterns. Figure 4 illustrates this phenomenon. In Figure 4(a), a neuron shows its ability to monitor the word pairs with the property of describing different things of the same type. 
The activation in the patch, containing the word pair “(cat, dog)”, is much higher than others. This is an informative pattern for the relation prediction of these two sentences, whose ground truth is contradiction. An interesting thing is there are two “dog” in sentence “ Dog running with pet toy being by another dog”. Our model ignores the useless word, which indicates this neuron selectively captures pattern by contextual understanding, not just word level interaction. In Figure 4(b), another neuron shows that it can capture the local contextual interactions, such as “(ocean waves, beach)”. These patterns can be easily captured by final layer and provide a strong support for the final prediction. 1039 Index of Cell Word or Phrase Pairs Explanation 5-th (jeans, shirt), (dog, cat) (retriever, cat), (stand, sitting) different entities or events of the same type 11-th (pool, swimming), (street, outside) (animal, dog), (grass,outside) word pair related to lexical entailment 20-th (skateboard, skateboarding), (running, runs) (advertisement, ad), (grassy, grass) words with different morphology 49-th (blue, blue), (wearing black, wearing white), (green uniform, red uniform) words related to color 55-th (a man, two other men), (a man, two girls) (Two women, No one) subjects with singular or plural forms Table 3: Multiple interpretable neurons and the word-pairs/phrase-pairs captured by these neurons. The third column gives the explanations of corresponding neuron’s behaviours. Table 3 illustrates multiple interpretable neurons and some representative word or phrase pairs which can activate these neurons. These cases show that our model can capture contextual interactions beyond word level. 6.2.3 Case Study for Attention Addressing Mechanism External memory with attention addressing mechanism enables the network explicitly to utilize the history information of two sentences simultaneously. As a by-product, the obtained attention distribution over history hidden states also help us interpret the network and discover underlying dependencies present in the data. To this end, we randomly sample two good cases with entailment relation from test data and visualize attention distributions over external memory constructed by last 9 hidden states. As shown in Figure 5(a), For the first sentence pair, when the word pair “(competition, competition)” are processed, the model simultaneously selects “warm, before” from one sentence and “gymnast,ready,for” from the other, which are informative patterns and indicate our model has the capacity of capturing phrase-phrase pair. Another case in Figure 5(b) also shows by attention mechanism, the network can sufficiently utilize the history information and the fusion approach allows two LSTMs to share the history information of each other. 6.2.4 Error Analysis Although our model DF-LSTMs are more sensitive to the discrepancy of the semantic capacity between two sentences, some cases still can not be solved by our model. For example, our model gives a wrong prediction of the sentence pair “A golden retriever nurses puppies/Puppies next to their mother”, whose ground truth is entailment. The model fails to realize “nurses” means “next to”. Besides, despite the large size of the training corpus, it’s still very difficult to solve some cases, which depend on the combination of the world knowledge and context-sensitive inferences. 
For example, given an entailment pair “Several women are playing volleyball/The women are hitting a ball with their arms”, all models predict “neutral”. These analysis suggests that some architectural improvements or external world knowledge are necessary to eliminate all errors instead of simply scaling up the basic model. 6.3 Experiment-II: Matching Question and Answer Matching question answering (MQA) is a typical task for semantic matching (Zhou et al., 2013). Given a question, we need select a correct answer from some candidate answers. In this paper, we use the dataset collected from Yahoo! Answers with the getByCategory function provided in Yahoo! Answers API, which produces 963, 072 questions and corresponding best answers. We then select the pairs in which the length of questions and answers are both in the interval [4, 30], thus obtaining 220, 000 question answer pairs to form the positive pairs. For negative pairs, we first use each question’s best answer as a query to retrieval top 1, 000 re1040 Female gymnast warm up before a competition Gymnast get ready for a competition (a) A female gymnast in black and red being coached on bar skills The female gymnast is training (b) Figure 5: Examples of external memory positions attended when encoding the next word pair (bold and marked by a box) Model k P@1(5) P@1(10) Random Guess 20.0 10.0 NBOW 50 63.9 47.6 single LSTM 50 68.2 53.9 parallel LSTMs 50 66.9 52.1 Attention LSTMs 50 73.5 62.0 Attention(w-by-w) LSTMs 50 75.1 64.0 DF-LSTMs 50 76.5 65.0 Table 4: Results of our proposed model against other neural models on Yahoo! question-answer pairs dataset. sults from the whole answer set with Lucene, where 4 or 9 answers will be selected randomly to construct the negative pairs. The whole dataset1 is divided into training, validation and testing data with proportion 20 : 1 : 1. Moreover, we give two test settings: selecting the best answer from 5 and 10 candidates respectively. 6.3.1 Results Results of MQA are shown in the Table 4. we can see that the proposed model also shows its superiority on this task, which outperforms the stateof-the-arts methods on both metrics (P@1(5) and P@1(10)) with a large margin. By analyzing the evaluation results of questionanswer matching in Table 4, we can see strong interaction models (attention LSTMs, our DFLSTMs) consistently outperform the weak interaction models (NBOW, parallel LSTMs) with a large margin, which suggests the importance of modelling strong interaction of two sentences. 7 Related Work Our model can be regarded as a strong interaction model, which has been explored in previous methods. One kind of methods is to compute similarities between all the words or phrases of the two sentences to model multiple-granularity interactions of two sentences, such as RAE (Socher et 1http://nlp.fudan.edu.cn/data/. al., 2011), Arc-II (Hu et al., 2014),ABCNN (Yin et al., 2015),MultiGranCNN (Yin and Sch¨utze, 2015), Multi-Perspective CNN (He et al., 2015), MV-LSTM (Wan et al., 2016). Socher et al. (2011) firstly used this paradigm for paraphrase detection. The representations of words or phrases are learned based on recursive autoencoders. Hu et al. (2014) proposed to an end-to-end architecture with convolutional neural network (Arc-II) to model multiple-granularity interactions of two sentences. Wan et al. (2016) used LSTM to enhance the positional contextual interactions of the words or phrases between two sentences. The input of LSTM for one sentence does not involve another sentence. 
Another kind of methods is to model the conditional encoding, in which the encoding of one sentence can be affected by another sentence. Rockt¨aschel et al. (2015) and Wang and Jiang (2015) used LSTM to read pairs of sequences to produce a final representation, which can be regarded as interaction of two sequences. By incorporating an attention mechanism, they got further improvements to the predictive abilities. Different with these two kinds of methods, we model the interactions of two texts in a recursively matching way. Based on this idea, we propose a model of deep fusion LSTMs to accomplish recursive conditional encodings. 8 Conclusion and Future Work In this paper, we propose a model of deep fusion LSTMs to capture the strong interaction for text semantic matching. Experiments on two large scale text matching tasks demonstrate the efficacy of our proposed model and its superiority to competitor models. Besides, our visualization analysis revealed that multiple interpretable neurons in our model can capture the contextual interactions of the words or phrases. 1041 In future work, we would like to investigate our model on more text matching tasks. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. This work was partially funded by National Natural Science Foundation of China (No. 61532011, 61473092, and 61472088), the National High Technology Research and Development Program of China (No. 2015AA015408). References D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. ArXiv e-prints, September. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In NIPS. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Wonmin Byeon, Thomas M Breuel, Federico Raue, and Marcus Liwicki. 2015. Scene labeling with lstm recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3547–3555. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. Alex Graves and J¨urgen Schmidhuber. 2009. Offline handwriting recognition with multidimensional recurrent neural networks. In Advances in Neural Information Processing Systems, pages 545–552. Alex Graves, Santiago Fern´andez, and J¨urgen Schmidhuber. 2007. Multi-dimensional recurrent neural networks. In Artificial Neural Networks–ICANN 2007, pages 549–558. Springer. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. Hua He, Kevin Gimpel, and Jimmy Lin. 2015. Multiperspective sentence similarity modeling with convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1576–1586. 
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684– 1692. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of The 32nd International Conference on Machine Learning. Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2015. Grid long short-term memory. arXiv preprint arXiv:1507.01526. PengFei Liu, Xipeng Qiu, Xinchi Chen, Shiyu Wu, and Xuanjing Huang. 2015. Multi-timescale long short-term memory neural network for modelling sentences and documents. In Proceedings of the Conference on EMNLP. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. 2016. Text matching as image recognition. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), 12:1532–1543. Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for community-based question answering. In Proceedings of International Joint Conference on Artificial Intelligence. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. 1042 Andrew M Saxe, James L McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120. Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439. Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. Ke M. Tran, Arianna Bisazza, and Christof Monz. 2016. Recurrent memory network for language modeling. CoRR, abs/1601.01272. Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, and Xueqi Cheng. 2016. A deep architecture for semantic matching with multiple positional sentence representations. In AAAI. Shuohang Wang and Jing Jiang. 2015. Learning natural language inference with lstm. arXiv preprint arXiv:1512.08849. Wenpeng Yin and Hinrich Sch¨utze. 2015. Convolutional neural network for paraphrase identification. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 901–911. Wenpeng Yin, Hinrich Schütze, Bing Xiang, and Bowen Zhou. 2015. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193. Guangyou Zhou, Yang Liu, Fang Liu, Daojian Zeng, and Jun Zhao. 2013. Improving question retrieval in community question answering using world knowledge. In IJCAI.
2016
98
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1044–1053, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Understanding Discourse on Work and Job-Related Well-Being in Public Social Media Tong Liu1∗, Christopher M. Homan1∗, Cecilia Ovesdotter Alm1$, Ann Marie White2, Megan C. Lytle2, Henry A. Kautz3 1∗Golisano College of Computing and Information Sciences, Rochester Institute of Technology 1$ College of Liberal Arts, Rochester Institute of Technology 2 University of Rochester Medical Center 3 Department of Computer Science, University of Rochester [email protected], [email protected], [email protected] annmarie white|megan [email protected] [email protected] Abstract We construct a humans-in-the-loop supervised learning framework that integrates crowdsourcing feedback and local knowledge to detect job-related tweets from individual and business accounts. Using data-driven ethnography, we examine discourse about work by fusing languagebased analysis with temporal, geospational, and labor statistics information. 1 Introduction Work plays a major role in nearly every facet of our lives. Negative and positive experiences at work places can have significant social and personal impacts. Employment condition is an important social determinant of health. But how exactly do jobs influence our lives, particularly with respect to well-being? Many theories address this question (Archambault and Grudin, 2012; Schaufeli and Bakker, 2004), but they are hard to validate as well-being is influenced by many factors, including geography as well as social and institutional support. Can computers help us understand the complex relationship between work and well-being? Both are broad concepts that are difficult to capture objectively (for instance, the unemployment rate as a statistic is continually redefined) and thus challenging subjects for computational research. Our first contribution is to propose a classification framework for such broad concepts as work that alternates between humans-in-the-loop annotation and machine learning over multiple iterations to simultaneously clarify human understanding of these concepts and automatically determine whether or not posts from public social media sites are about work. Our framework balances the effectiveness of crowdsourced workers with local experience, evaluates the degree of subjectivity throughout the process, and uses an iterative posthoc evaluation method to address the problem of discovering gold standard data. Our performance (on an open-domain problem) demonstrates the value of our humans-in-the-loop approach which may be of special relevance to those interested in discourse understanding, particularly settings characterized by high levels of subjectivity, where integrating human intelligence into active learning processes is essential. Our second contribution is to use our classifiers to study job-related discourse on social media using data-driven ethnography. Language is fundamentally a social phenomenon, and social media gives us a lens through which to observe a very particular form of discourse in real time. We add depth to the NLP analysis by gathering data from specific geographical regions to study discourse along a broad spectrum of interacting social groups, using work as a framing device, and we fuse language-based analysis with temporal, geospatial and labor statistics dimensions. 
2 Background and Related Work Though not the first study of job-related social media, prior ones used data from large companies’ internal sites, whose users were employees (De Choudhury and Counts, 2013; Yardi et al., 2008; Kolari et al., 2007; Brzozowski, 2009). An obvious limitation in that case is it excludes populations without access to such restricted networks. Moreover, workers may not disclose true feelings about their jobs on such sites, since their employ1044 ers can easily monitor them. On the other hand, we show that on Twitter, it is quite common for tweets to relate negative feelings about work (“I don’t wanna go to work today”), unprofessional behavior (“Got drunk as hell last night and still made it to work”), or a desire to work elsewhere (“I want to go work at Disney World so bad”). Nonetheless, these studies inform our work. DeChoudhury et al. (2013) investigated the landscape of emotional expression of the employees via enterprise internal microblogging. Yardi et al. (2008) examined temporal aspects of blogging usage within corporate internal blogging community. Kolari et al. (2007) characterized comprehensively how behaviors expressed in posts impact a company’s internal social networks. Brzozowski (2009) described a tool that aggregated shared internal social media which when combined with its enterprise directory added understanding the organization and employees connections. From a theoretical perspective, the Job Demands-Resources Model (Schaufeli and Bakker, 2004) suggests that job demands (e.g., overworked, dissonance, and conflict) lead to burnout and disengagement while resources (e.g., appreciation, cohesion, and safety) often result in satisfaction and productivity. Although burnout and engagement have an inverse relationship, these states fluctuate and can vary over time. In 2014, more than two-thirds of U.S. workers were disengaged at work (Gallup, 2015a) and this disconnection costs the U.S. up to $398 billion annually in lost work and medical treatment (Gallup, 2015b). Indeed, job dissatisfaction poses serious health risks and has even been linked to suicide (Hazards Magazine, 2014). Thus, examining social media for job-related messages provides a novel opportunity to study job discourse and associated demands and resources. Moreover, the declarative and affective tone of these tweets may have important implications for understanding the relationship between burnout and engagement with such public health concerns as mental health. 3 Humans-in-the-Loop Classification From July 2013 to June 2014 we collected over 7M geo-tagged tweets from around 85,000 public accounts in a 15-county around a midsized city using DataSift1. We removed punctuation and special characters, and used the Internet Slang Dictio1http://datasift.com/ nary2 to normalize nonstandard terms. Figure 1 shows our humans-in-the-loop framework for learning classifiers to identify job-related posts. It consists of four rounds of machine classification – similar to that of Li et al. (2014) except that our rounds are not as uniform – where the classifier in each round acts as a filter on our training data, providing human annotators a sample of Twitter data to label and (except for the final round) using these labeled data to train the classifiers in later rounds. Figure 1: Flowchart of our humans-in-the-loop framework, laid out in Section 3. The initial classifier C0 is a simple termmatching filter; see Table 1 (number options were considered for some terms). 
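As a concrete illustration, a C0-style rule filter can be written in a few lines. The include/exclude terms below follow Table 1; the exact regular expressions, and the decision to let any exclude term veto a match, are illustrative assumptions rather than the authors' exact implementation.

```python
import re

# Assumed approximation of the Table 1 rules: a tweet is "Job-Likely" if it
# matches an include pattern and none of the exclude patterns.
INCLUDE = [
    r"\bjob\b", r"\bjobless\b", r"\bmanager\b", r"\bboss\b",
    r"\b(?:my|your|his|her|their)\s+work\b", r"\bat\s+work\b",
]
EXCLUDE = [
    r"\bschool\b", r"\bclass\b", r"\bhomework\b", r"\bstudent\b", r"\bcourse\b",
    r"\bfinals\b", r"\b(?:good|nice|great)\s+job\b", r"\bboss\s+ass\b",
]

def is_job_likely(tweet: str) -> bool:
    text = tweet.lower()
    if any(re.search(pattern, text) for pattern in EXCLUDE):
        return False
    return any(re.search(pattern, text) for pattern in INCLUDE)

if __name__ == "__main__":
    print(is_job_likely("so bored at work today"))       # True
    print(is_job_likely("great job on your homework!"))  # False
```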
The other classifiers (C1, C2, C3) are SVMs that use a feature space of n-grams from the training set. Include job, jobless, manager, boss my/your/his/her/their/at work Exclude school, class, homework, student, course finals, good/nice/great job, boss ass3 Table 1: C0 rules identifying Job-Likely tweets. Round 1. We ran C0 on our dataset. Approximately 40K tweets having at least five tokens passed this filter. We call them Job-Likely tweets. We randomly chose around 2,000 JobLikely tweets and split them evenly into 50 AMT Human Intelligence Tasks (HITs), and further randomly duplicated five tweets in each HIT to evaluate each worker’s consistency. Five crowdworkers assigned to each HIT4 answered, for each tweet, 2http://www.noslang.com/dictionary 3Describe something awesome in a sense of utter dominance, magical superiority, or being ridiculously good. 4This is based on empirical insights for crowdsourced annotation tasks (Callison-Burch, 2009; Evanini et al., 2010). 1045 the question: Is this tweet about job or employment? All crowdworkers lived in the U.S. and had an approval rating of 90% or better. They were paid $1.00 per HIT5. We assessed inter-annotator reliability among the five annotators in each HIT using Geertzen’s tool (Geertzen, 2016). This yielded 1,297 tweets where all 5 annotators agreed on the same label (Table 2). To balance our training data, we added 757 tweets chosen randomly from tweets outside the Job-Likely set that we labeled not job-related. C1 trained on this set. Round 2. Our goal was to collect 4,000 more labeled tweets that, when combined with the Round 1 training data, would yield a class-balanced set. Using C1 to perform regression, we ranked the tweets in our dataset by the confidence score (Chang and Lin, 2011). We then spot-checked the tweets to estimate the frequency of job-related tweets as the confidence score increases. We discovered that among the top-ranked tweets about half, and near the separating hyperplane (i.e., where the confidence scores are near zero) almost none, are job-related. Based on these estimates, we randomly sampled 2,400 tweets from those in the top 80th percentile of confidence scores (Type-1). We then randomly sampled about 800 tweets each from the first deciles of tweets greater and lesser than zero, respectively (Type-2). The rationale for drawing from these two groups was that the false Type-1 tweets represent those on which the C1 classifier most egregiously fails, and the Type-2 tweets are those closest to the feature vectors and those toward which the classifier is most sensitive. Crowdworkers again annotated these tweets in the same fashion as in Round 1 (see Table 3), and cross-round comparisons are in Tables 2 and 4. We trained C2 on all tweets from Round 1 and 2 with unanimous labels (bold in Table 2). AMTs job-related not job-related 3 4 5 3 4 5 Round 1 104 389 1027 78 116 270 Round 2 140 287 721 66 216 2568 Table 2: Summary of both annotation rounds. 5We consulted with Turker Nation (http://www. turkernation.com) to ensure that the workers were treated and compensated fairly for their tasks. We also rewarded annotators based on the qualities of their work. Round 2 job-related not job-related 3 4 5 3 4 5 Type-1 129 280 713 50 149 1079 Type-2 11 7 8 16 67 1489 Table 3: Summary of tweet labels in Round 2 by confidence type (showing when 3/4/5 of 5 annotators agreed). 
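The Round 2 sampling scheme described above can be sketched as follows: train the Round 1 classifier, rank unlabeled tweets by their distance from the separating hyperplane, and draw Type-1 candidates from the top 80th percentile and Type-2 candidates from the first deciles on either side of zero. The use of scikit-learn, the n-gram settings, and the default sample sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def round2_candidates(labeled_texts, labels, unlabeled_texts,
                      n_type1=2400, n_type2=1600, seed=0):
    """Rank unlabeled tweets by SVM confidence and draw Type-1/Type-2 samples for annotation."""
    rng = np.random.default_rng(seed)
    vectorizer = CountVectorizer(ngram_range=(1, 2))
    clf = LinearSVC().fit(vectorizer.fit_transform(labeled_texts), labels)

    scores = clf.decision_function(vectorizer.transform(unlabeled_texts))
    order = np.argsort(scores)  # indices sorted by ascending confidence score

    # Type-1: tweets in the top 80th percentile of confidence scores.
    top = order[scores[order] >= np.percentile(scores, 80)]
    type1 = rng.choice(top, size=min(n_type1, len(top)), replace=False)

    # Type-2: tweets in the first deciles just above and just below the hyperplane.
    pos = order[scores[order] > 0]
    neg = order[scores[order] <= 0]
    near = np.concatenate([pos[: max(1, len(pos) // 10)],
                           neg[-max(1, len(neg) // 10):]])
    type2 = rng.choice(near, size=min(n_type2, len(near)), replace=False)
    return type1, type2
```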
AMTs Fleiss’ kappa Krippendorf’s alpha Round 1 0.62 ± 0.14 0.62 ± 0.14 Round 2 0.81 ± 0.09 0.81 ± 0.08 Table 4: Average ± stdev agreement from Round 1 and 2 are Good, Very Good (Altman, 1991). Annotations Sample Tweet Y Y Y Y Y Really bored....., no entertainment at work today Y Y Y Y N two more days of work then I finally get a day off. Y Y Y N N Leaving work at 430 and driving in this snow is going to be the death of me Y Y N N N Being a mommy is the hardest but most rewarding job a women can have #babyBliss #babybliss Y N N N N These refs need to DO THEIR FUCKING JOBS N N N N N One of the best Friday nights I’ve had in a while Table 5: Inter-annotator agreement combinations with sample tweets. Y denotes job-related. Cases where the majority (not all) annotators agreed (3/4 out of 5) are underlined in bold. Round 3. Two coauthors with prior experience from the local community reviewed instances from Round 1 and 2 on which crowdworkers disagreed (highlighted in Table 5) and provided labels. Cohen’s kappa agreement was high: κ = 0.80. Combined with all labeled data from the previous rounds this yielded 2,670 goldstandard-labeled job-related and 3,250 not jobrelated tweets. We trained C3 on this entire set. Since it is not strictly class-balanced, we gridsearched on a range of class weights and chose the estimator that optimized F1 score, using 10fold cross validation6. Table 6 shows C3’s topweighted features, which reflect the semantic field of work for the job-related class. 6These scores were determined respectively using the mean score over the cross-validation folds. The parameter settings that gave the best results on the left out data were a linear kernel with penalty parameter C = 0.1 and class weight ratio of 1:1. 1046 job-related weights not job-related weights work 2.026 did -0.714 job 1.930 amazing -0.613 manager 1.714 nut -0.600 jobs 1.633 hard -0.571 managers 1.190 constr -0.470 working 0.827 phone -0.403 bosses 0.500 doing -0.403 lovemyjob 0.500 since -0.373 shift 0.487 brdg -0.363 worked 0.410 play -0.348 paid 0.374 its -0.337 worries 0.369 think -0.330 boss 0.369 thru -0.329 seriously 0.368 hand -0.321 money 0.319 awesome -0.319 Table 6: Top 15 features for both classes of C3. Discovering Businesses. Manual examination of job-related tweets revealed patterns like: Panera Bread: Baker – Night (#LOCATION) http://URL #Hospitality #VeteranJob #Job #Jobs #TweetMyJobs. Nearly all tweets that contained at least one of these hashtags: #veteranjob, #job, #jobs, #tweetmyjobs, #hiring, #retail, #realestate, #hr also included a URL, which spot-checking revealed nearly always led to a recruitment website (see Table 7). This led to an effective heuristic to separate individual from business accounts only for posts that have first been classified as jobrelated: if an account had more job-related tweets with any of the above hashtags + URL patterns, we labeled it business; otherwise individual. hashtag only hashtag + URL #veteranjob 18,066 18,066 #job 79,362 79,327 #jobs 58,637 58,631 #tweetmyjobs 39,007 39,007 #hiring 148 147 #retail 17,037 17,035 #realestate 92 92 #hr 400 399 Table 7: Counts of hashtags queried, and counts of their subsets with hashtags coupled with URL. 4 Results and Discussion Crowdsourced Validation The fundamental difficulty in open-domain classification problems such as this one is there is no gold-standard data to hold out at the beginning of the process. 
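The tuning procedure in footnote 6 — a grid over class weights for a linear-kernel SVM with C = 0.1, scored by mean F1 over 10 cross-validation folds — can be sketched with scikit-learn as below. The candidate weight grid, and the assumption that labels are coded 1 for job-related and 0 otherwise, are illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

def fit_c3(texts, labels):
    """Grid-search class weights for a linear-kernel SVM, optimizing mean F1 over 10 folds."""
    pipeline = Pipeline([
        ("ngrams", CountVectorizer(ngram_range=(1, 2))),
        ("svm", SVC(kernel="linear", C=0.1)),
    ])
    grid = {
        # Candidate job-related : not job-related weight ratios (labels assumed 1/0).
        "svm__class_weight": [{1: 1, 0: 1}, {1: 2, 0: 1}, {1: 1, 0: 2}, "balanced"],
    }
    search = GridSearchCV(pipeline, grid, scoring="f1", cv=10)
    search.fit(texts, labels)
    return search.best_estimator_, search.best_params_
```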
To address this, we adopted a post-hoc evaluation where we took balanced sets of labeled tweets from each classifier (C0, C1, C2 and C3) and asked AMT workers to label a total of 1,600 samples, taking the majority votes (where at least 3 out of 5 crowdworkers agreed) as reference labels. Our results (Table 8) show that C3 performs the best, and significantly better than C0 and C1. Estimating Effective Recall The two machinelabeled classes in our test data are roughly balanced, which is not the case in real-world scenarios. We estimated the effective recall under the assumption that the error rates in our test samples are representative of the entire dataset. Let y be the total number of the classifier-labeled “positive” elements in the entire dataset and n be the total of “negative” elements. Let yt be the number of classifier-labeled “positive” tweets in our 1, 600samples test set and let nt = 1, 600 −yt. Then the estimated effective recall ˆR = y·nt·R y·nt·R+n·yt·(1−R). Model Class P R ˆR F1 C0 job 0.72 0.33 0.01 0.45 notjob 0.68 0.92 1.00 0.78 C1 job 0.79 0.82 0.15 0.80 notjob 0.88 0.86 0.99 0.87 C2 job 0.82 0.95 0.41 0.88 notjob 0.97 0.86 0.99 0.91 C3 job 0.83 0.96 0.45 0.89 notjob 0.97 0.87 0.99 0.92 Table 8: Crowdsourced validations of instances identified by 4 distinct models (1,600 total tweets). Assessing Business Classifier For Table 8’s tweets labeled by C0 – C3 as job-related, we asked AMT workers: Is this tweet more likely from a personal or business account? Table 9 shows that this method was quite accurate. From Class P R F1 C0 individual 0.86 1.00 0.92 business 0.00 0.00 0.00 avg/total 0.74 0.86 0.79 C1 individual 1.00 0.97 0.98 business 0.98 1.00 0.99 avg/total 0.99 0.99 0.99 C2 individual 1.00 0.98 0.99 business 0.98 1.00 0.99 avg/total 0.99 0.99 0.99 C3 individual 1.00 0.99 0.99 business 0.99 1.00 0.99 avg/total 0.99 0.99 0.99 Table 9: Crowdsourced validations of individuals vs. businesses job-related tweets. Our explanation for the strong performance of the business classifier is that the class of jobrelated tweets is relatively rare, and so by applying the classifier only to job-related tweets we sim1047 plify the individual-or-business problem dramatically. Another, perhaps equally effective, simplification is that our tweets are geo-specific and so we automatically filter out business tweets from, e.g., national media. Generalizability Tests Can our best model C3 discover job-related tweets from other geographical regions, even though it was trained on data from one specific region? We repeated the tests above on 400 geo-tagged tweets from Detroit (balanced between job-related and not). Table 10 shows that C3 and the business classifier generalize well to another region. This suggests the transferability of our humans-in-the-loop classification framework and of heuristic to separate individual from business accounts for tweets classified as job-related. Model Class P R F1 C3 job 0.85 0.99 0.92 notjob 0.99 0.87 0.93 Heuristic individual 1.00 0.96 0.98 business 0.96 1.00 0.98 avg/total 0.98 0.98 0.98 Table 10: Validations of C3 and business classifier on Detroit data. 5 Understanding Job-Related Discourse Using the job-related tweets – from both individual and business accounts – extracted by C3 from the July 2013-June 2014 dataset (see Table 12), we conducted the following analyses. C3 Versus C0 The fact that C3 outperforms C0 demonstrates our humans-in-the-loop framework is necessary and effective compared to an intuitive term-matching filter. 
We further examined the messages labeled as job-related by C3, but not captured by C0. More than 160,000 tweets fell into this Difference set, in which approximately 85,000 tweets are from individual accounts while the rest are from business accounts. Table 11 shows the top 3 most frequent uni-, bi-, and trigrams in the Difference dataset. These n-grams from the individual group suggest that people often talk about job-related topics while mentioning temporal information or announcing their working schedules. We neglected such time-related phrases when defining C0. In contrast, the frequencies of the listed n-grams in the business group are much higher than those in the individual group. This indicates that our definitions of inclusion terms in C0 did not capture a considerable amount of posts involving broad job-related topics, which is also reflected in Table 9: our business classifier did not find business accounts from the job-related tweets extracted by C0. Individual Business Unigrams day, 6989 ny, 83907 today, 5370 #job, 75178 good, 4245 #jobs, 55105 Bigrams last night, 359 #jobs #tweetmyjobs, 32165 getting ready, 354 #rochester ny, 22101 first day, 296 #job #jobs, 16923 Trigrams working hour shift, 51 #job #jobs #tweetmyjobs, 12004 first day back, 48 ny #jobs #tweetmyjobs, 4955 separate leader follower, 44 ny #retail #job, 4704 Table 11: Top 3 most frequent uni-, bi-, and trigrams with frequencies in the Difference set. Hashtags Individuals posted 11,935 unique hashtags and businesses only 414. The top 250 hashtags from each group are shown in Figure 2. Figure 2: Hashtags in job-related tweets: above – individual accounts; below – business accounts. Individual users used an abbreviation for the name of the midsized city to mark their location, 1048 and fml7 to express personal embarrassing stories. Work and job are self-explanatory. Money, motivation relates to jobs. Tired, exhausted, fuck, insomnia, bored, struggle express negative conditions. Likewise, lovemyjob, happy, awesome, excited, yay, tgif 8 convey positive affects experienced from jobs. Business accounts exhibit distinct patterns. Besides the hashtags queried (Table 7), we saw local place names, like corning, rochester, batavia, pittsford, and regional ones like syracuse, ithaca. Customerservice, nursing, accounting, engineering, hospitality, construction record occupations, while kellyjobs, familydollar, cintasjobs, cfgjobs, searsjobs point to business agents. Unlike individual users, businesses do not use hashtags reflecting affective expressions. Linguistic Differences We used the TweetNLP POS tagger (Gimpel et al., 2011). Figure 3 shows nine part-of-speech tag9 frequencies for three subsets of tweets. Figure 3: POS tag comparisons (normalized, averaged) among three subsets of tweets: job-related tweets from individual accounts (red), job-related tweets from business accounts (blue) and not jobrelated tweets (black). Business accounts use NNPs more than individuals, perhaps because they often advertise job openings at specific locations, like New York, Sears. Individuals use NNPs less frequently and in a more casual way, e.g., Jojo, galactica, Valli. Also, individuals use JJ, NN, NNS, PRP, PRP$, RB, UH, and VB more regularly than business ac7An acronym for Fuck My Life. 8An acronym for Thank God It’s Friday to express the joy one feels in knowing that the work week has officially ended and that one has two days off which to enjoy. 
9JJ – Adjective; NN – Noun (singular or mass); NNS – Noun (plural); NNP – Proper noun (singular); PRP – Personal pronoun; PRP$ – Possessive pronoun; RB – Adverb; UH – Interjection; VB – Verb (base form) (Santorini, 1990). counts do. Not job-related tweets have similar patterns to job-related ones from individual accounts, suggesting that individual users exhibit analogous language habits regardless of topic. Temporal Patterns Our findings that individual users frequently used time-related n-grams (Table 11) prompted us to examine the temporal patterns of job discourse. Figure 4a suggests that individuals talk about jobs the most in December and January (which also have the most tweets over other topics), and the least in the warmer months. July witnesses the busiest job-related tweeting from business and January the least. The user community is slightly less active in the warmer months, with fewer tweets then. Figure 4b shows that job-related tweet volumes are higher on weekdays and lower on weekends, following the standard work week. Weekends see fewer business tweets than weekdays do. Sunday is the most – while Friday and Saturday are the least – active days from the not job-related perspective. Figure 4c shows hourly trends. Job-related tweets from business accounts are most frequent during business hours, peaking at 11, and then taper off. Perhaps professionals are either getting their commercial tasks completed before lunch, or expecting others to check updates during lunch. Individuals post about jobs almost anytime awake and have a similar distribution to non-job-related tweets. Measuring Affective Changes We examined positive affect (PA) and negative affect (NA) to measure diurnal changes in public mood (Figures 5 and 6), using two recognized lexicons, in jobrelated tweets from individual accounts (left), jobrelated tweets from business accounts (middle), and not job-related tweets (right). (1) Linguistic Inquiry and Word Count We used LIWC’s positive emotion and negative emotion to represent PA and NA respectively (Pennebaker et al., 2001) because it is common in behavioral health studies, and used as a standard comparison in referenced work. Figure 5 shows the mean daily trends of PA and NA.10 Panels 5a and 5b reveal contrasting job-related affective patterns, compared to prior trends from 10Non-equal y-axes help show peak/valley patterns here and in Figure 6, also motivated by lexicon’s unequal sizes. 1049 (a) In each month (b) On each day of week (c) In each hour Figure 4: Distributions of job-related tweets over time by job class. We converted timestamps from the Coordinated Universal Time standard (UTC) to local time zone with daylight saving time taken into account. enterprise-wide micro-blog usage (De Choudhury and Counts, 2013), i.e., public social media exhibit gradual increase in PA while internal enterprise network decrease after business. This perhaps confirms our suspicion that people talk about work on public social media differently than on work-based media. (2) Word-Emotion Association Lexicon We focused on the words from EmoLex’s positive and negative categories, which represent sentiment polarities (Mohammad and Turney, 2013; Mohammad and Turney, 2010) and calculated the score for each tweet similarly as LIWC. The average daily positive and negative sentiment scores in Figure 6 display patterns analogous to Figure 5. Labor Statistics We explored associations between Twitter temporal patterns, affect, and official labor statistics (Figure 8). 
These monthly statistics11 include: labor force, employment, unemployment, and unemployment rate. We collected one more year of Twitter data from the same area, and applied C3 to extract the jobrelated posts from individual and business accounts (Table 12 summarizes the basic statistics), then defined the following monthwise statistics for our two-year dataset: count of overall/jobindividual/job-business/others tweets; percentage of job-individual/job-business/others tweets in overall tweets; average LIWC PA/NA scores of job-individual/job-business/others tweets12. Positive affect expressed in job-related discourse from both individual and business accounts correlate negatively with unemployment and un11Published by US Department of Labor, including: Local Area Unemployment Statistics; State and Metro Area Employment, Hours, and Earnings. 12IND: individual; BIZ: business; pct: %; avg: average. employment rate. This is intuitive, as unemployment is generally believed to have a negative impact on individuals’ lives. The counts of jobrelated tweets from individual and not job-related tweets are both positively correlated with unemployment and unemployment rate, suggesting that unemployment may lead to more activities in public social media. This correlation result shows that online textual disclosure themes and behaviors can reflect institutional survey data. Inside vs. Outside City We compared tweets occurring within the city boundary to those lying outside (Table 13). The percentages of job-related tweets from individual accounts, either in urban or rural areas, remain relatively even. The proportion of job-related tweets from business accounts decreased sharply from urban to rural locations. This may be because business districts are usually centered in urban areas and individual tweets reflect more complex geospatial distributions. Job-Life Cycle Model Based on hand inspection of a large number of job-related tweets and on models of the relationship between work and wellness found in behavioral studies (Archambault and Grudin, 2012; Schaufeli and Bakker, 2004), we tentatively propose a job-life model for jobrelated discourse from individual accounts (Figure 7). Each state in the model has three dimensions: the point of view, the affect, and the job-related activity, in terms of basic level of employment, expressed in the tweet. We concatenated together all job-related tweets posted by each individual into a single document and performed latent Dirichlet allocation (LDA) (Blei et al., 2003) on this user-level corpus, using Gensim ( ˇReh˚uˇrek and Sojka, 2010). We used 12 1050 (a) Job-related tweets from individuals (b) Job-related tweets from businesses (c) Not job-related tweets Figure 5: Diurnal trends of positive and negative affect based on LIWC. (a) Job-related tweets from individuals (b) Job-related tweets from businesses (c) Not job-related tweets Figure 6: Diurnal trends of positive and negative affect based on EmoLex. Unique counts job-related tweets from individual accounts job-related tweets from business accounts not job-related tweets tweets accounts tweets accounts tweets accounts July 2013 - June 2014 114,302 17,183 79,721 292 6,912,306 84,718 July 2014 - June 2015 85,851 16,350 115,302 333 5,486,943 98,716 Total (unique counts) 200,153 28,161 195,023 431 12,399,249 136,703 Table 12: Summary statistics of the two-year Twitter data classified by C3. Figure 7: The job-life model captures the point of view, affect, and job-related activity in tweets. 
% job-related individual job-related business others Inside 1.59 3.73 94.68 Outside 1.85 1.51 96.65 Combined 1.82 1.77 96.41 Table 13: Percent inside and outside city tweets. topics for the LDA based on the number of affect classes (three) times the number of job-related activities (four). See Table 14. Topic 0 appears to be about getting ready to start a job, and topic 1 about leaving work permanently or temporarily. Topics 2, 5, 6, 8, and 11 suggest how key affect is for understanding job1051 Figure 8: Correlation matrix with Spearman used for test at level .05, with insignificant coefficients left blank. The matrix is ordered by a hierarchical clustering algorithm. Blue – positive correlation, red – negative correlation. Topic index Representative words 0 getting, ready, day, first, hopefully 1 last, finally, week, break, last day 2 fucking, hate, seriously, lol, really 3 come visit, some, talking, pissed 4 weekend, today, home, thank god 5 wish, love, better, money, working 6 shift, morning, leave, shit, bored 7 manager, guy, girl, watch, keep 8 feel, sure, supposed, help, miss 9 much, early, long, coffee, care 10 time, still, hour, interview, since 11 best, pay, bored, suck, proud Table 14: The top five words in each of the twelve topics discovered by LDA. related discourse: 2 and 6 lean towards dissatisfaction and 5 toward satisfaction. 11 looks like a mixture. Topic 7 connects to coworkers. Many topics point to the importance of time (including leisure time in topic 4). 6 Conclusion We used crowdsourcing and local expertise to power a humans-in-the-loop classification framework that iteratively improves identification of public job-related tweets. We separated business accounts from individual in job-related discourse. We also analyzed identified tweets integrating temporal, affective, geospatial, and statistical information. While jobs take up enormous amounts of most adults’ time, job-related tweets are still rather infrequent. Examining affective changes reveals that PA and NA change independently; low NA appears to indicate the absence of negative feelings, not the presence of positive ones. Our work is of social importance to workingage adults, especially for those who may struggle with job-related issues. Besides providing insights for discourse and its links to social science, our study could lead to practical applications, such as: aiding policy-makers with macro-level insights on job markets, connecting job-support resources to those in need, and facilitating the development of job recommendation systems. This work has limitations. We did not study whether providing contextual information in our humans-in-the-loop framework would influence the model performance. This is left for future work. Additionally we recognize that the hashtag inventory used to discover business accounts from job-related topics might need to change over time, to achieve robust performance in the future. As another point, due to Twitter demographics, we are less likely to observe working seniors. Acknowledgments We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported in part by a GCCIS Kodak Endowed Chair Fund Health Information Technology Strategic Initiative Grant and NSF Award #SES-1111016. References DG Altman. 1991. Inter-Rater Agreement. Practical Statistics for Medical Research, 5:403–409. Anne Archambault and Jonathan Grudin. 2012. A Longitudinal Study of Facebook, LinkedIn, & Twitter Use. 
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 2741–2750. ACM. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet Allocation. the Journal of Machine Learning Research, 3:993–1022. Michael J Brzozowski. 2009. Watercooler: Exploring an Organization through Enterprise Social Media. In Proceedings of the ACM 2009 International Conference on Supporting Group Work, pages 219– 228. ACM. Chris Callison-Burch. 2009. Fast, Cheap, and Creative: Evaluating Translation Quality Using Amazon’s Mechanical Turk. In Proceedings of the 2009 1052 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 286– 295. Association for Computational Linguistics. Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A Library for Support Vector Machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27. Munmun De Choudhury and Scott Counts. 2013. Understanding Affect in the Workplace via Social Media. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, pages 303–316. ACM. Keelan Evanini, Derrick Higgins, and Klaus Zechner. 2010. Using Amazon Mechanical Turk for Transcription of Non-Native Speech. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 53–56. Association for Computational Linguistics. Gallup. 2015a. Majority of U.S. Employees not Engaged despite Gains in 2014. Gallup. 2015b. Only 35% of U.S. Managers are Engaged in their Jobs. Jeroen Geertzen. 2016. Inter-Rater Agreement with Multiple Raters and Variables. [Online; accessed 17-February-2016]. Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. 2011. Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papersVolume 2, pages 42–47. Association for Computational Linguistics. Hazards Magazine. 2014. Work Suicide. Pranam Kolari, Tim Finin, Kelly Lyons, Yelena Yesha, Yaacov Yesha, Stephen Perelgut, and Jen Hawkins. 2007. On the Structure, Properties and Utility of Internal Corporate Blogs. Growth, 45000:50000. Jiwei Li, Alan Ritter, Claire Cardie, and Eduard H Hovy. 2014. Major Life Event Extraction from Twitter based on Congratulations/Condolences Speech Acts. In EMNLP, pages 1997–2007. Saif M Mohammad and Peter D Turney. 2010. Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create An Emotion Lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26–34. Association for Computational Linguistics. Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a Word-Emotion Association Lexicon. 29(3):436–465. James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic Inquiry and Word Count: LIWC 2001. Mahway: Lawrence Erlbaum Associates, 71:2001. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45– 50, Valletta, Malta, May. ELRA. Beatrice Santorini. 1990. Part-of-Speech Tagging Guidelines for the Penn Treebank Project (3rd Revision). Wilmar B Schaufeli and Arnold B Bakker. 2004. 
Job Demands, Job Resources, and Their Relationship with Burnout and Engagement: A Multi-Sample Study. Journal of Organizational Behavior, 25(3):293–315. Sarita Yardi, Scott Golder, and Mike Brzozowski. 2008. The Pulse of the Corporate Blogosphere. In Conf. Supplement of CSCW 2008, pages 8–12. Citeseer.
2016
99
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1–10 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1001 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1–10 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1001 Adversarial Multi-task Learning for Text Classification Pengfei Liu Xipeng Qiu Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {pfliu14,xpqiu,xjhuang}@fudan.edu.cn Abstract Neural network models have shown their promising opportunities for multi-task learning, which focus on learning the shared layers to extract the common and task-invariant features. However, in most existing approaches, the extracted shared features are prone to be contaminated by task-specific features or the noise brought by other tasks. In this paper, we propose an adversarial multi-task learning framework, alleviating the shared and private latent feature spaces from interfering with each other. We conduct extensive experiments on 16 different text classification tasks, which demonstrates the benefits of our approach. Besides, we show that the shared knowledge learned by our proposed model can be regarded as off-the-shelf knowledge and easily transferred to new tasks. The datasets of all 16 tasks are publicly available at http://nlp.fudan. edu.cn/data/ 1 Introduction Multi-task learning is an effective approach to improve the performance of a single task with the help of other related tasks. Recently, neuralbased models for multi-task learning have become very popular, ranging from computer vision (Misra et al., 2016; Zhang et al., 2014) to natural language processing (Collobert and Weston, 2008; Luong et al., 2015), since they provide a convenient way of combining information from multiple tasks. However, most existing work on multi-task learning (Liu et al., 2016c,b) attempts to divide the features of different tasks into private and shared spaces, merely based on whether parameters of A B (a) Shared-Private Model A B (b) Adversarial Shared-Private Model Figure 1: Two sharing schemes for task A and task B. The overlap between two black circles denotes shared space. The blue triangles and boxes represent the task-specific features while the red circles denote the features which can be shared. some components should be shared. As shown in Figure 1-(a), the general shared-private model introduces two feature spaces for any task: one is used to store task-dependent features, the other is used to capture shared features. The major limitation of this framework is that the shared feature space could contain some unnecessary taskspecific features, while some sharable features could also be mixed in private space, suffering from feature redundancy. Taking the following two sentences as examples, which are extracted from two different sentiment classification tasks: Movie reviews and Baby products reviews. The infantile cart is simple and easy to use. This kind of humour is infantile and boring. The word “infantile” indicates negative sentiment in Movie task while it is neutral in Baby task. However, the general shared-private model could place the task-specific word “infantile” in a shared space, leaving potential hazards for other tasks. 
Additionally, the capacity of shared space could also be wasted by some unnecessary features. To address this problem, in this paper we propose an adversarial multi-task framework, in which the shared and private feature spaces are in1 herently disjoint by introducing orthogonality constraints. Specifically, we design a generic sharedprivate learning framework to model the text sequence. To prevent the shared and private latent feature spaces from interfering with each other, we introduce two strategies: adversarial training and orthogonality constraints. The adversarial training is used to ensure that the shared feature space simply contains common and task-invariant information, while the orthogonality constraint is used to eliminate redundant features from the private and shared spaces. The contributions of this paper can be summarized as follows. 1. Proposed model divides the task-specific and shared space in a more precise way, rather than roughly sharing parameters. 2. We extend the original binary adversarial training to multi-class, which not only enables multiple tasks to be jointly trained, but allows us to utilize unlabeled data. 3. We can condense the shared knowledge among multiple tasks into an off-the-shelf neural layer, which can be easily transferred to new tasks. 2 Recurrent Models for Text Classification There are many neural sentence models, which can be used for text modelling, involving recurrent neural networks (Sutskever et al., 2014; Chung et al., 2014; Liu et al., 2015a), convolutional neural networks (Collobert et al., 2011; Kalchbrenner et al., 2014), and recursive neural networks (Socher et al., 2013). Here we adopt recurrent neural network with long short-term memory (LSTM) due to their superior performance in various NLP tasks (Liu et al., 2016a; Lin et al., 2017). Long Short-term Memory Long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997) is a type of recurrent neural network (RNN) (Elman, 1990), and specifically addresses the issue of learning long-term dependencies. While there are numerous LSTM variants, here we use the LSTM architecture used by (Jozefowicz et al., 2015), which is similar to the architecture of (Graves, 2013) but without peep-hole connections. We define the LSTM units at each time step t to be a collection of vectors in Rd: an input gate it, a forget gate ft, an output gate ot, a memory cell ct and a hidden state ht. d is the number of the LSTM units. The elements of the gating vectors it, ft and ot are in [0, 1]. The LSTM is precisely specified as follows.   ˜ct ot it ft  =   tanh σ σ σ    Wp  xt ht−1  + bp  , (1) ct = ˜ct ⊙it + ct−1 ⊙ft, (2) ht = ot ⊙tanh (ct) , (3) where xt ∈Re is the input at the current time step; Wp ∈R4d×(d+e) and bp ∈R4d are parameters of affine transformation; σ denotes the logistic sigmoid function and ⊙denotes elementwise multiplication. The update of each LSTM unit can be written precisely as follows: ht = LSTM(ht−1, xt, θp). (4) Here, the function LSTM(·, ·, ·, ·) is a shorthand for Eq. (1-3), and θp represents all the parameters of LSTM. Text Classification with LSTM Given a text sequence x = {x1, x2, · · · , xT }, we first use a lookup layer to get the vector representation (embeddings) xi of the each word xi. The output at the last moment hT can be regarded as the representation of the whole sequence, which has a fully connected layer followed by a softmax non-linear layer that predicts the probability distribution over classes. 
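As one concrete reading of the recurrence in Eqs. (1)–(4) and the softmax output layer just described (given explicitly in Eq. (5) below), a single-task LSTM classifier can be sketched in PyTorch; the hidden sizes and module layout are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Embed a token sequence, run an LSTM, and classify from the last hidden state h_T."""

    def __init__(self, vocab_size: int, embed_dim: int = 200,
                 hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.lookup = nn.Embedding(vocab_size, embed_dim)              # lookup layer for embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # the recurrence of Eqs. (1)-(4)
        self.out = nn.Linear(hidden_dim, num_classes)                  # W h_T + b; softmax applied in the loss

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.lookup(token_ids)      # (batch, T, embed_dim)
        _, (h_t, _) = self.lstm(embedded)      # h_t: (1, batch, hidden_dim), the final hidden state
        return self.out(h_t.squeeze(0))        # unnormalized class scores

if __name__ == "__main__":
    model = LSTMClassifier(vocab_size=1000)
    logits = model(torch.randint(0, 1000, (4, 12)))   # batch of 4 sequences of length 12
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 1, 0]))  # cross-entropy over predicted distribution
    print(logits.shape, float(loss))
```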
ˆy = softmax(WhT + b) (5) where ˆy is prediction probabilities, W is the weight which needs to be learned, b is a bias term. Given a corpus with N training samples (xi, yi), the parameters of the network are trained to minimise the cross-entropy of the predicted and true distributions. L(ˆy, y) = − N X i=1 C X j=1 yj i log(ˆyj i ), (6) where yj i is the ground-truth label; ˆyj i is prediction probabilities, and C is the class number. 2 softmax Lm task LSTM softmax Ln task xm xn (a) Fully Shared Model (FS-MTL) xm xn LSTM LSTM LSTM softmax softmax Lm task Ln task (b) Shared-Private Model (SP-MTL) Figure 2: Two architectures for learning multiple tasks. Yellow and gray boxes represent shared and private LSTM layers respectively. 3 Multi-task Learning for Text Classification The goal of multi-task learning is to utilizes the correlation among these related tasks to improve classification by learning tasks in parallel. To facilitate this, we give some explanation for notations used in this paper. Formally, we refer to Dk as a dataset with Nk samples for task k. Specifically, Dk = {(xk i , yk i )}Nk i=1 (7) where xk i and yk i denote a sentence and corresponding label for task k. 3.1 Two Sharing Schemes for Sentence Modeling The key factor of multi-task learning is the sharing scheme in latent feature space. In neural network based model, the latent features can be regarded as the states of hidden neurons. Specific to text classification, the latent features are the hidden states of LSTM at the end of a sentence. Therefore, the sharing schemes are different in how to group the shared features. Here, we first introduce two sharing schemes with multi-task learning: fully-shared scheme and shared-private scheme. Fully-Shared Model (FS-MTL) In fully-shared model, we use a single shared LSTM layer to extract features for all the tasks. For example, given two tasks m and n, it takes the view that the features of task m can be totally shared by task n and vice versa. This model ignores the fact that some features are task-dependent. Figure 2a illustrates the fully-shared model. Shared-Private Model (SP-MTL) As shown in Figure 2b, the shared-private model introduces two feature spaces for each task: one is used to store task-dependent features, the other is used to capture task-invariant features. Accordingly, we can see each task is assigned a private LSTM layer and shared LSTM layer. Formally, for any sentence in task k, we can compute its shared representation sk t and task-specific representation hk t as follows: sk t = LSTM(xt, sk t−1, θs), (8) hk t = LSTM(xt, hm t−1, θk) (9) where LSTM(., θ) is defined as Eq. (4). The final features are concatenation of the features from private space and shared space. 3.2 Task-Specific Output Layer For a sentence in task k, its feature h(k), emitted by the deep muti-task architectures, is ultimately fed into the corresponding task-specific softmax layer for classification or other tasks. The parameters of the network are trained to minimise the cross-entropy of the predicted and true distributions on all the tasks. The loss Ltask can be computed as: LTask = K X k=1 αkL(ˆy(k), y(k)) (10) where αk is the weights for each task k respectively. L(ˆy, y) is defined as Eq. 6. 4 Incorporating Adversarial Training Although the shared-private model separates the feature space into the shared and private spaces, there is no guarantee that sharable features can not exist in private feature space, or vice versa. 
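Before turning to the adversarial remedy, the shared-private scheme of Eqs. (8)–(10) can be sketched as follows. The PyTorch module below is an illustrative reading of SP-MTL — one shared LSTM, plus a private LSTM and softmax head per task — with sizes chosen for exposition only.

```python
import torch
import torch.nn as nn

class SharedPrivateEncoder(nn.Module):
    """SP-MTL sketch: shared LSTM plus per-task private LSTMs and output heads (Eqs. 8-10)."""

    def __init__(self, vocab_size: int, num_tasks: int, embed_dim: int = 200,
                 hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.lookup = nn.Embedding(vocab_size, embed_dim)
        self.shared_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # parameters theta_s
        self.private_lstms = nn.ModuleList(
            [nn.LSTM(embed_dim, hidden_dim, batch_first=True) for _ in range(num_tasks)]  # theta_k
        )
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, num_classes) for _ in range(num_tasks)]  # task-specific softmax layers
        )

    def forward(self, token_ids: torch.Tensor, task: int):
        embedded = self.lookup(token_ids)
        _, (s_t, _) = self.shared_lstm(embedded)            # shared representation s_T (Eq. 8)
        _, (h_t, _) = self.private_lstms[task](embedded)    # task-specific representation h_T (Eq. 9)
        features = torch.cat([s_t.squeeze(0), h_t.squeeze(0)], dim=-1)  # concatenation of both spaces
        return self.heads[task](features), s_t.squeeze(0), h_t.squeeze(0)

if __name__ == "__main__":
    model = SharedPrivateEncoder(vocab_size=1000, num_tasks=16)
    logits, shared, private = model(torch.randint(0, 1000, (4, 12)), task=3)
    print(logits.shape, shared.shape, private.shape)
```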
Thus, some useful sharable features could be ignored in shared-private model, and the shared feature space is also vulnerable to contamination by some taskspecific information. Therefore, a simple principle can be applied into multi-task learning that a good shared feature space should contain more common information and no task-specific information. To address this problem, we introduce adversarial training into multi-task framework as shown in Figure 3 (ASPMTL). 3 xm xn LSTM LSTM LSTM LDiff LAdv LDiff softmax softmax Lm task Ln task Figure 3: Adversarial shared-private model. Yellow and gray boxes represent shared and private LSTM layers respectively. 4.1 Adversarial Network Adversarial networks have recently surfaced and are first used for generative model (Goodfellow et al., 2014). The goal is to learn a generative distribution pG(x) that matches the real data distribution Pdata(x) Specifically, GAN learns a generative network G and discriminative model D, in which G generates samples from the generator distribution pG(x). and D learns to determine whether a sample is from pG(x) or Pdata(x). This min-max game can be optimized by the following risk: φ = min G max D  Ex∼Pdata[log D(x)] + Ez∼p(z)[log(1 −D(G(z)))]  (11) While originally proposed for generating random samples, adversarial network can be used as a general tool to measure equivalence between distributions (Taigman et al., 2016). Formally, (Ajakan et al., 2014) linked the adversarial loss to the H-divergence between two distributions and successfully achieve unsupervised domain adaptation with adversarial network. Motivated by theory on domain adaptation (Ben-David et al., 2010, 2007; Bousmalis et al., 2016) that a transferable feature is one for which an algorithm cannot learn to identify the domain of origin of the input observation. 4.2 Task Adversarial Loss for MTL Inspired by adversarial networks (Goodfellow et al., 2014), we proposed an adversarial sharedprivate model for multi-task learning, in which a shared recurrent neural layer is working adversarially towards a learnable multi-layer perceptron, preventing it from making an accurate prediction about the types of tasks. This adversarial training encourages shared space to be more pure and ensure the shared representation not be contaminated by task-specific features. Task Discriminator Discriminator is used to map the shared representation of sentences into a probability distribution, estimating what kinds of tasks the encoded sentence comes from. D(sk T , θD) = softmax(b + Usk T ) (12) where U ∈Rd×d is a learnable parameter and b ∈ Rd is a bias. Adversarial Loss Different with most existing multi-task learning algorithm, we add an extra task adversarial loss LAdv to prevent task-specific feature from creeping in to shared space. The task adversarial loss is used to train a model to produce shared features such that a classifier cannot reliably predict the task based on these features. The original loss of adversarial network is limited since it can only be used in binary situation. To overcome this, we extend it to multi-class form, which allow our model can be trained together with multiple tasks: LAdv = min θs λmax θD ( K X k=1 Nk X i=1 dk i log[D(E(xk))]) ! (13) where dk i denotes the ground-truth label indicating the type of the current task. Here, there is a minmax optimization and the basic idea is that, given a sentence, the shared LSTM generates a representation to mislead the task discriminator. 
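One standard way to realize the task discriminator of Eq. (12) and the multi-class adversarial loss of Eq. (13) is through a gradient reversal layer, which the paper adopts in Section 4.4 following Ganin and Lempitsky (2015). The PyTorch sketch below is an illustrative reading of that construction, not the authors' released code.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class TaskDiscriminator(nn.Module):
    """Softmax classifier over task identities, fed the shared representation s_T (Eq. 12)."""

    def __init__(self, hidden_dim: int, num_tasks: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_tasks)

    def forward(self, shared_repr: torch.Tensor, lam: float = 0.05) -> torch.Tensor:
        reversed_repr = GradReverse.apply(shared_repr, lam)  # flips gradients flowing into the shared encoder
        return self.proj(reversed_repr)

if __name__ == "__main__":
    disc = TaskDiscriminator(hidden_dim=128, num_tasks=16)
    shared = torch.randn(8, 128, requires_grad=True)   # shared representations for 8 sentences
    task_ids = torch.randint(0, 16, (8,))
    adv_loss = nn.CrossEntropyLoss()(disc(shared), task_ids)  # a cross-entropy realization of Eq. (13)
    adv_loss.backward()  # gradients reaching `shared` are reversed, so the encoder learns to confuse the discriminator
```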
At the same time, the discriminator tries its best to make a correct classification on the type of task. After the training phase, the shared feature extractor and task discriminator reach a point at which both cannot improve and the discriminator is unable to differentiate among all the tasks. Semi-supervised Learning Multi-task Learning We notice that the LAdv requires only the input sentence x and does not require the corresponding label y, which makes it possible to combine our model with semi-supervised learning. Finally, in this semi-supervised multi-task learning framework, our model can not only utilize the data from related tasks, but can employ abundant unlabeled corpora. 4.3 Orthogonality Constraints We notice that there is a potential drawback of the above model. That is, the task-invariant features can appear both in shared space and private space. Motivated by recently work(Jia et al., 2010; Salzmann et al., 2010; Bousmalis et al., 2016) 4 Dataset Train Dev. Test Unlab. Avg. L Vocab. Books 1400 200 400 2000 159 62K Elec. 1398 200 400 2000 101 30K DVD 1400 200 400 2000 173 69K Kitchen 1400 200 400 2000 89 28K Apparel 1400 200 400 2000 57 21K Camera 1397 200 400 2000 130 26K Health 1400 200 400 2000 81 26K Music 1400 200 400 2000 136 60K Toys 1400 200 400 2000 90 28K Video 1400 200 400 2000 156 57K Baby 1300 200 400 2000 104 26K Mag. 1370 200 400 2000 117 30K Soft. 1315 200 400 475 129 26K Sports 1400 200 400 2000 94 30K IMDB 1400 200 400 2000 269 44K MR 1400 200 400 2000 21 12K Table 1: Statistics of the 16 datasets. The columns 2-5 denote the number of samples in training, development, test and unlabeled sets. The last two columns represent the average length and vocabulary size of corresponding dataset. on shared-private latent space analysis, we introduce orthogonality constraints, which penalize redundant latent representations and encourages the shared and private extractors to encode different aspects of the inputs. After exploring many optional methods, we find below loss is optimal, which is used by Bousmalis et al. (2016) and achieve a better performance: Ldiff= K X k=1 Sk⊤Hk 2 F , (14) where ∥· ∥2 F is the squared Frobenius norm. Sk and Hk are two matrics, whose rows are the output of shared extractor Es(, ; θs) and task-specific extrator Ek(, ; θk) of a input sentence. 4.4 Put It All Together The final loss function of our model can be written as: L = LTask + λLAdv + γLDiff (15) where λ and γ are hyper-parameter. The networks are trained with backpropagation and this minimax optimization becomes possible via the use of a gradient reversal layer (Ganin and Lempitsky, 2015). 5 Experiment 5.1 Dataset To make an extensive evaluation, we collect 16 different datasets from several popular review corpora. The first 14 datasets are product reviews, which contain Amazon product reviews from different domains, such as Books, DVDs, Electronics, ect. The goal is to classify a product review as either positive or negative. These datasets are collected based on the raw data 1 provided by (Blitzer et al., 2007). Specifically, we extract the sentences and corresponding labels from the unprocessed original data 2. The only preprocessing operation of these sentences is tokenized using the Stanford tokenizer 3. The remaining two datasets are about movie reviews. The IMDB dataset4 consists of movie reviews with binary classes (Maas et al., 2011). One key aspect of this dataset is that each movie review has several sentences. 
The MR dataset also consists of movie reviews from rotten tomato website with two classes 5(Pang and Lee, 2005). All the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 20% and 10% respectively. The detailed statistics about all the datasets are listed in Table 1. 5.2 Competitor Methods for Multi-task Learning The multi-task frameworks proposed by previous works are various while not all can be applied to the tasks we focused. Nevertheless, we chose two most related neural models for multi-task learning and implement them as competitor methods. • MT-CNN: This model is proposed by Collobert and Weston (2008) with convolutional layer, in which lookup-tables are shared partially while other layers are task-specific. 1https://www.cs.jhu.edu/˜mdredze/ datasets/sentiment/ 2Blitzer et al. (2007) also provides two extra processed datasets with the format of Bag-of-Words, which are not proper for neural-based models. 3http://nlp.stanford.edu/software/ tokenizer.shtml 4https://www.cs.jhu.edu/˜mdredze/ datasets/sentiment/unprocessed.tar.gz 5https://www.cs.cornell.edu/people/ pabo/movie-review-data/. 5 Task Single Task Multiple Tasks LSTM BiLSTM sLSTM Avg. MT-DNN MT-CNN FS-MTL SP-MTL ASP-MTL Books 20.5 19.0 18.0 19.2 17.8(−1.4) 15.5(−3.7) 17.5(−1.7) 18.8(−0.4) 16.0(−3.2) Electronics 19.5 21.5 23.3 21.4 18.3(−3.1) 16.8(−4.6) 14.3(−7.1) 15.3(−6.1) 13.2(−8.2) DVD 18.3 19.5 22.0 19.9 15.8(−4.1) 16.0(−3.9) 16.5(−3.4) 16.0(−3.9) 14.5(−5.4) Kitchen 22.0 18.8 19.5 20.1 19.3(−0.8) 16.8(−3.3) 14.0(−6.1) 14.8(−5.3) 13.8(−6.3) Apparel 16.8 14.0 16.3 15.7 15.0(−0.7) 16.3(+0.6) 15.5(−0.2) 13.5(−2.2) 13.0(−2.7) Camera 14.8 14.0 15.0 14.6 13.8(−0.8) 14.0(−0.6) 13.5(−1.1) 12.0(−2.6) 10.8(−3.8) Health 15.5 21.3 16.5 17.8 14.3(−3.5) 12.8(−5.0) 12.0(−5.8) 12.8(−5.0) 11.8(−6.0) Music 23.3 22.8 23.0 23.0 15.3(−7.7) 16.3(−6.7) 18.8(−4.2) 17.0(−6.0) 17.5(−5.5) Toys 16.8 15.3 16.8 16.3 12.3(−4.0) 10.8(−5.5) 15.5(−0.8) 14.8(−1.5) 12.0(−4.3) Video 18.5 16.3 16.3 17.0 15.0(−2.0) 18.5(+1.5) 16.3(−0.7) 16.8(−0.2) 15.5(−1.5) Baby 15.3 16.5 15.8 15.9 12.0(−3.9) 12.3(−3.6) 12.0(−3.9) 13.3(−2.6) 11.8(−4.1) Magazines 10.8 8.5 12.3 10.5 10.5(+0.0) 12.3(+1.8) 7.5(−3.0) 8.0(−2.5) 7.8(−2.7) Software 15.3 14.3 14.5 14.7 14.3(−0.4) 13.5(−1.2) 13.8(−0.9) 13.0(−1.7) 12.8(−1.9) Sports 18.3 16.0 17.5 17.3 16.8(−0.5) 16.0(−1.3) 14.5(−2.8) 12.8(−4.5) 14.3(−3.0) IMDB 18.3 15.0 18.5 17.3 16.8(−0.5) 13.8(−3.5) 17.5(+0.2) 15.3(−2.0) 14.5(−2.8) MR 27.3 25.3 28.0 26.9 24.5(−2.4) 25.5(−1.4) 25.3(−1.6) 24.0(−2.9) 23.3(−3.6) AVG 18.2 17.4 18.3 18.0 15.7(−2.2) 15.5(−2.5) 15.3(−2.7) 14.9(−3.1) 13.9(−4.1) Table 2: Error rates of our models on 16 datasets against typical baselines. The numbers in brackets represent the improvements relative to the average performance (Avg.) of three single task baselines. • MT-DNN: The model is proposed by Liu et al. (2015b) with bag-of-words input and multi-layer perceptrons, in which a hidden layer is shared. 5.3 Hyperparameters The word embeddings for all of the models are initialized with the 200d GloVe vectors ((Pennington et al., 2014)). The other parameters are initialized by randomly sampling from uniform distribution in [−0.1, 0.1]. The mini-batch size is set to 16. For each task, we take the hyperparameters which achieve the best performance on the development set via an small grid search over combinations of the initial learning rate [0.1, 0.01], λ ∈[0.01, 0.1], and γ ∈[0.01, 0.1]. 
Finally, we chose the learning rate as 0.01, λ as 0.05 and γ as 0.01. 5.4 Performance Evaluation Table 2 shows the error rates on 16 text classification tasks. The column of “Single Task” shows the results of vanilla LSTM, bidirectional LSTM (BiLSTM), stacked LSTM (sLSTM) and the average error rates of previous three models. The column of “Multiple Tasks” shows the results achieved by corresponding multi-task models. From this table, we can see that the performance of most tasks can be improved with a large margin with the help of multi-task learning, in which our model achieves the lowest error rates. More concretely, compared with SP-MTL, ASPMTL achieves 4.1% average improvement surpassing SP-MTL with 1.0%, which indicates the importance of adversarial learning. It is noteworthy that for FS-MTL, the performances of some tasks are degraded, since this model puts all private and shared information into a unified space. 5.5 Shared Knowledge Transfer With the help of adversarial learning, the shared feature extractor Es can generate more pure taskinvariant representations, which can be considered as off-the-shelf knowledge and then be used for unseen new tasks. To test the transferability of our learned shared extractor, we also design an experiment, in which we take turns choosing 15 tasks to train our model MS with multi-task learning, then the learned shared layer are transferred to a second network MT that is used for the remaining one task. The parameters of transferred layer are kept frozen, and the rest of parameters of the network MT are randomly initialized. More formally, we investigate two mechanisms towards the transferred shared extractor. As shown in Figure 4. The first one Single Channel (SC) model consists of one shared feature extractor Es from MS, then the extracted representation will be sent to an output layer. By contrast, the BiChannel (BC) model introduces an extra LSTM layer to encode more task-specific information. To evaluate the effectiveness of our introduced adversarial training framework, we also make a compar6 Source Tasks Single Task Transfer Models LSTM BiLSTM sLSTM Avg. SP-MTL-SC SP-MTL-BC ASP-MTL-SC ASP-MTL-BC φ (Books) 20.5 19.0 18.0 19.2 17.8(−1.4) 16.3(−2.9) 16.8(−2.4) 16.3(−2.9) φ (Electronics) 19.5 21.5 23.3 21.4 15.3(−6.1) 14.8(−6.6) 17.8(−3.6) 16.8(−4.6) φ (DVD) 18.3 19.5 22.0 19.9 14.8(−5.1) 15.5(−4.4) 14.5(−5.4) 14.3(−5.6) φ (Kitchen) 22.0 18.8 19.5 20.1 15.0(−5.1) 16.3(−3.8) 16.3(−3.8) 15.0(−5.1) φ (Apparel) 16.8 14.0 16.3 15.7 14.8(−0.9) 12.0(−3.7) 12.5(−3.2) 13.8(−1.9) φ (Camera) 14.8 14.0 15.0 14.6 13.3(−1.3) 12.5(−2.1) 11.8(−2.8) 10.3(−4.3) φ (Health) 15.5 21.3 16.5 17.8 14.5(−3.3) 14.3(−3.5) 12.3(−5.5) 13.5(−4.3) φ (Music) 23.3 22.8 23.0 23.0 20.0(−3.0) 17.8(−5.2) 17.5(−5.5) 18.3(−4.7) φ (Toys) 16.8 15.3 16.8 16.3 13.8(−2.5) 12.5(−3.8) 13.0(−3.3) 11.8(−4.5) φ (Video) 18.5 16.3 16.3 17.0 14.3(−2.7) 15.0(−2.0) 14.8(−2.2) 14.8(−2.2) φ (Baby) 15.3 16.5 15.8 15.9 16.5(+0.6) 16.8(+0.9) 13.5(−2.4) 12.0(−3.9) φ (Magazines) 10.8 8.5 12.3 10.5 10.5(+0.0) 10.3(−0.2) 8.8(−1.7) 9.5(−1.0) φ (Software) 15.3 14.3 14.5 14.7 13.0(−1.7) 12.8(−1.9) 14.5(−0.2) 11.8(−2.9) φ (Sports) 18.3 16.0 17.5 17.3 16.3(−1.0) 16.3(−1.0) 13.3(−4.0) 13.5(−3.8) φ (IMDB) 18.3 15.0 18.5 17.3 12.8(−4.5) 12.8(−4.5) 12.5(−4.8) 13.3(−4.0) φ (MR) 27.3 25.3 28.0 26.9 26.0(−0.9) 26.5(−0.4) 24.8(−2.1) 23.5(−3.4) AVG 18.2 17.4 18.3 18.0 15.6(−2.4) 15.2(−2.8) 14.7(−3.3) 14.3(−3.7) Table 3: Error rates of our models on 16 datasets against vanilla multi-task learning. 
φ (Books) means that we transfer the knowledge of the other 15 tasks to the target task Books. xt LSTM softmax Es (a) Single Channel xt LSTM LSTM softmax Es (b) Bi-Channel Figure 4: Two transfer strategies using a pretrained shared LSTM layer. Yellow box denotes shared feature extractor Es trained by 15 tasks. ison with vanilla multi-task learning method. Results and Analysis As shown in Table 3, we can see the shared layer from ASP-MTL achieves a better performance compared with SP-MTL. Besides, for the two kinds of transfer strategies, the Bi-Channel model performs better. The reason is that the task-specific layer introduced in the BiChannel model can store some private features. Overall, the results indicate that we can save the existing knowledge into a shared recurrent layer using adversarial multi-task learning, which is quite useful for a new task. 5.6 Visualization To get an intuitive understanding of how the introduced orthogonality constraints worked compared with vanilla shared-private model, we design an experiment to examine the behaviors of neurons from private layer and shared layer. More concretely, we refer to htj as the activation of the jneuron at time step t, where t ∈{1, . . . , n} and j ∈{1, . . . , d}. By visualizing the hidden state hj and analyzing the maximum activation, we can find what kinds of patterns the current neuron focuses on. Figure 5 illustrates this phenomenon. Here, we randomly sample a sentence from the validation set of Baby task and analyze the changes of the predicted sentiment score at different time steps, which are obtained by SP-MTL and our proposed model. Additionally, to get more insights into how neurons in shared layer behave diversely towards different input word, we visualize the activation of two typical neurons. For the positive sentence “Five stars, my baby can fall asleep soon in the stroller”, both models capture the informative pattern “Five stars” 6. However, SP-MTL makes a wrong prediction due to misunderstanding of the word “asleep”. By contrast, our model makes a correct prediction and the reason can be inferred from the activation of Figure 5-(b), where the shared layer of SP-MTL is so sensitive that many features related to other tasks are included, such as ”asleep”, which misleads the final prediction. This indicates the importance of introducing adversarial learning to prevent the shared layer from being contaminated by task-specific features. We also list some typical patterns captured by 6For this case, the vanilla LSTM also give a wrong answer due to ignoring the feature “Five stars”. 7 Five stars , my baby can fall asleep soon in the stroller 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 SP-MTL Ours (a) Predicted Sentiment Score by Two Models (b) Behaviours of Neuron hs 18 and hs 21 Figure 5: (a) The change of the predicted sentiment score at different time steps. Y-axis represents the sentiment score, while X-axis represents the input words in chronological order. The darker grey horizontal line gives a border between the positive and negative sentiments. (b) The purple heat map describes the behaviour of neuron hs 18 from shared layer of SP-MTL, while the blue one is used to show the behaviour of neuron hs 21, which belongs to the shared layer of our model. 
Model Shared Layer Task-Movie Task-Baby SP-MTL good, great bad, love, simple, cut, slow, cheap, infantile good, great, well-directed, pointless, cut, cheap, infantile love, bad, cute, safety, mild, broken simple ASP-MTL good, great, love, bad poor well-directed, pointless, cut, cheap, infantile cute, safety, mild, broken simple Table 4: Typical patterns captured by shared layer and task-specific layer of SP-MTL and ASP-MTL models on Movie and Baby tasks. neurons from shared layer and task-specific layer in Table 4, and we have observed that: 1) for SP-MTL, if some patterns are captured by taskspecific layer, they are likely to be placed into shared space. Clearly, suppose we have many tasks to be trained jointly, the shared layer bear much pressure and must sacrifice substantial amount of capacity to capture the patterns they actually do not need. Furthermore, some typical taskinvariant features also go into task-specific layer. 2) for ASP-MTL, we find the features captured by shared and task-specific layer have a small amount of intersection, which allows these two kinds of layers can work effectively. 6 Related Work There are two threads of related work. One thread is multi-task learning with neural network. Neural networks based multi-task learning has been proven effective in many NLP problems (Collobert and Weston, 2008; Glorot et al., 2011). Liu et al. (2016c) first utilizes different LSTM layers to construct multi-task learning framwork for text classification. Liu et al. (2016b) proposes a generic multi-task framework, in which different tasks can share information by an external memory and communicate by a reading/writing mechanism. These work has potential limitation of just learning a shared space solely on sharing parameters, while our model introduce two strategies to learn the clear and non-redundant shared-private space. Another thread of work is adversarial network. Adversarial networks have recently surfaced as a general tool measure equivalence between distributions and it has proven to be effective in a variety of tasks. Ajakan et al. (2014); Bousmalis et al. (2016) applied adverarial training to domain adaptation, aiming at transferring the knowledge of one source domain to target domain. Park and Im (2016) proposed a novel approach for multimodal representation learning which uses adversarial back-propagation concept. Different from these models, our model aims to find task-invariant sharable information for multiple related tasks using adversarial training strategy. Moreover, we extend binary adversarial training to multi-class, which enable multiple tasks to be jointly trained. 7 Conclusion In this paper, we have proposed an adversarial multi-task learning framework, in which the taskspecific and task-invariant features are learned non-redundantly, therefore capturing the sharedprivate separation of different tasks. We have demonstrated the effectiveness of our approach by applying our model to 16 different text classification tasks. We also perform extensive qualitative 8 analysis, deriving insights and indirectly explaining the quantitative improvements in the overall performance. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments and thank Kaiyu Qian, Gang Niu for useful discussions. This work was partially funded by National Natural Science Foundation of China (No. 61532011 and 61672162), the National High Technology Research and Development Program of China (No. 
2015AA015408), Shanghai Municipal Science and Technology Commission (No. 16JC1420401). References Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, and Mario Marchand. 2014. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446 . Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine learning 79(1-2):151–175. Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. 2007. Analysis of representations for domain adaptation. Advances in neural information processing systems 19:137. John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL. volume 7, pages 440–447. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems. pages 343– 351. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The JMLR 12:2493–2537. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science 14(2):179–211. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15). pages 1180–1189. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). pages 513–520. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. pages 2672–2680. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 . Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Yangqing Jia, Mathieu Salzmann, and Trevor Darrell. 2010. Factorized latent spaces with structured sparsity. In Advances in Neural Information Processing Systems. pages 982–990. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of The 32nd International Conference on Machine Learning. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130 . Pengfe Liu, Xipeng Qiu, Jifan Chen, and Xuanjing Huang. 2016a. Deep fusion LSTMs for text semantic matching. In Proceedings of ACL. PengFei Liu, Xipeng Qiu, Xinchi Chen, Shiyu Wu, and Xuanjing Huang. 2015a. Multi-timescale long short-term memory neural network for modelling sentences and documents. 
In Proceedings of the Conference on EMNLP. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016b. Deep multi-task learning with shared memory. In Proceedings of EMNLP. PengFei Liu, Xipeng Qiu, and Xuanjing Huang. 2016c. Recurrent neural network for text classification with multi-task learning. In Proceedings of International Joint Conference on Artificial Intelligence. 9 Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015b. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In NAACL. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114 . Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the ACL. pages 142–150. Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. 2016. Cross-stitch networks for multi-task learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 3994–4003. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 115–124. Gwangbeen Park and Woobin Im. 2016. Image-text multi-modal representation learning by adversarial backpropagation. arXiv preprint arXiv:1612.08354 . Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. Proceedings of the EMNLP 12:1532– 1543. Mathieu Salzmann, Carl Henrik Ek, Raquel Urtasun, and Trevor Darrell. 2010. Factorized orthogonal latent spaces. In AISTATS. pages 701–708. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in NIPS. pages 3104–3112. Yaniv Taigman, Adam Polyak, and Lior Wolf. 2016. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200 . Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang. 2014. Facial landmark detection by deep multi-task learning. In European Conference on Computer Vision. Springer, pages 94–108. 10
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 102–111 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1010 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 102–111 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1010 Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution Ting Liu†, Yiming Cui‡, Qingyu Yin†, Weinan Zhang†, Shijin Wang‡ and Guoping Hu‡ †Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin, China ‡iFLYTEK Research, Beijing, China †{tliu,qyyin,wnzhang}@ir.hit.edu.cn ‡{ymcui,sjwang3,gphu}@iflytek.com Abstract Most existing approaches for zero pronoun resolution are heavily relying on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle in the progress of zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvements of 3.1% F-score on OntoNotes 5.0 data. 1 Introduction Previous works on zero pronoun (ZP) resolution mainly focused on the supervised learning approaches (Han, 2006; Zhao and Ng, 2007; Iida et al., 2007; Kong and Zhou, 2010; Iida and Poesio, 2011; Chen and Ng, 2013). However, a major obstacle for training the supervised learning models for ZP resolution is the lack of annotated data. An important step is to organize the shared task on anaphora and coreference resolution, such as the ACE evaluations, SemEval-2010 shared task on Coreference Resolution in Multiple Languages (Marta Recasens, 2010) and CoNLL2012 shared task on Modeling Multilingual Unrestricted Coreference in OntoNotes (Sameer Pradhan, 2012). Following these shared tasks, the annotated evaluation data can be released for the following researches. Despite the success and contributions of these shared tasks, it still faces the challenge of spending manpower on labeling the extended data for better training performance and domain adaptation. To address the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Inspired by data generation on cloze-style reading comprehension, we can treat the zero pronoun resolution task as a special case of reading comprehension problem. So we can adopt similar data generation methods of reading comprehension to the zero pronoun resolution task. For the noun or pronoun in the document, which has the frequency equal to or greater than 2, we randomly choose one position where the noun or pronoun is located on, and replace it with a specific symbol ⟨blank⟩. 
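A minimal sketch of this blanking procedure is given below. It assumes the document is already split into token lists and that a hypothetical candidate_words helper returns the nouns and pronouns of the document (e.g., from a POS tagger), so only the frequency check, the random choice of one occurrence, and the ⟨blank⟩ replacement are shown.

import random
from collections import Counter

BLANK = "<blank>"

def make_pseudo_sample(sentences, candidate_words):
    # sentences: the document as a list of token lists
    # candidate_words: hypothetical helper returning its nouns/pronouns
    counts = Counter(tok for sent in sentences for tok in sent)
    answers = [w for w in candidate_words(sentences) if counts[w] >= 2]
    if not answers:
        return None
    answer = random.choice(answers)
    positions = [(i, j) for i, sent in enumerate(sentences)
                 for j, tok in enumerate(sent) if tok == answer]
    si, ti = random.choice(positions)   # pick one occurrence at random
    query = list(sentences[si])
    query[ti] = BLANK                   # blank it out in the query sentence
    return sentences, query, answer     # document, query, answer word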
Let query Q and answer A denote the sentence that contains a ⟨blank⟩, and the noun or pronoun which is replaced by the ⟨blank⟩, respectively. Thus, a pseudo training sample can be represented as a triple: ⟨D, Q, A⟩ (1) For the zero pronoun resolution task, a ⟨blank⟩ represents a zero pronoun (ZP) in query Q, and A indicates the corresponding antecedent of the ZP. In this way, tremendous pseudo training samples can be generated from the various documents, such as news corpus. Towards the shortcomings of the previous approaches that are based on feature engineering, we propose a neural network architecture, which is an attention-based neural network model, for zero pronoun resolution. Also we propose a two-step 102 training method, which benefit from both largescale pseudo training data and task-specific data, showing promising performance. To sum up, the contributions of this paper are listed as follows. • To our knowledge, this is the first time that utilizing reading comprehension neural network model into zero pronoun resolution task. • We propose a two-step training approach, namely pre-training-then-adaptation, which benefits from both the large-scale automatically generated pseudo training data and taskspecific data. • Towards the shortcomings of the feature engineering approaches, we first propose an attention-based neural network model for zero pronoun resolution. 2 The Proposed Approach In this section, we will describe our approach in detail. First, we will describe our method of generating large-scale pseudo training data for zero pronoun resolution. Then we will introduce twostep training approach to alleviate the gaps between pseudo and real training data. Finally, the attention-based neural network model as well as associated unknown words processing techniques will be described. 2.1 Generating Pseudo Training Data In order to get large quantities of training data for neural network model, we propose an approach, which is inspired by (Hermann et al., 2015), to automatically generate large-scale pseudo training data for zero pronoun resolution. However, our approach is much more simple and general than that of (Hermann et al., 2015). We will introduce the details of generating the pseudo training data for zero pronoun resolution as follows. First, we collect a large number of documents that are relevant (or homogenous in some sense) to the released OntoNote 5.0 data for zero pronoun resolution task in terms of its domain. In our experiments, we used large-scale news data for training. Given a certain document D, which is composed by a set of sentences D = {s1, s2, ..., sn}, we randomly choose an answer word A in the document. Note that, we restrict A to be either a noun or pronoun, where the part-of-speech is identified using LTP Toolkit (Che et al., 2010), as well as the answer word should appear at least twice in the document. Second, after the answer word A is chosen, the sentence that contains A is defined as a query Q, in which the answer word A is replaced by a specific symbol ⟨blank⟩. In this way, given the query Q and document D, the target of the prediction is to recover the answer A. That is quite similar to the zero pronoun resolution task. Therefore, the automatically generated training samples is called pseudo training data. Figure 1 shows an example of a pseudo training sample. In this way, we can generate tremendous triples of ⟨D, Q, A⟩for training neural network, without making any assumptions on the nature of the original corpus. 
2.2 Two-step Training It should be noted that, though we have generated large-scale pseudo training data for neural network training, there is still a gap between pseudo training data and the real zero pronoun resolution task in terms of the query style. So we should do some adaptations to our model to deal with the zero pronoun resolution problems ideally. In this paper, we used an effective approach to deal with the mismatch between pseudo training data and zero pronoun resolution task-specific data. Generally speaking, in the first stage, we use a large amount of the pseudo training data to train a fundamental model, and choose the best model according to the validation accuracy. Then we continue to train from the previous best model using the zero pronoun resolution task-specific training data, which is exactly the same domain and query type as the standard zero pronoun resolution task data. The using of the combination of proposed pseudo training data and task-specific data, i.e. zero pronoun resolution task data, is far more effective than using either of them alone. Though there is a gap between these two data, they share many similar characteristics to each other as illustrated in the previous part, so it is promising to utilize these two types of data together, which will compensate to each other. The two-step training procedure can be concluded as, 103 Document: 1 ||| welcome both of you to the studio to participate in our program , 欢迎两位呢来演播室参与我们的节目, 2 ||| it happened that i was going to have lunch with a friend at noon . 正好因为我也和朋友这个,这个中午一起吃饭。 3 ||| after that , i received an sms from 1860 . 然后我就收到1860 的短信。 4 ||| uh-huh , it was by sms . 嗯,是通过短信的方式, 5 ||| uh-huh , that means , er , you knew about the accident through the source of radio station . 嗯,就是说呃你是通过台里面的一个信息的渠道知道这儿出了这样的事故。 6 ||| although we live in the west instead of the east part , and it did not affect us that much , 虽然我们生活在西部不是在东部,对我们影响不是很大, 7 ||| but i think it is very useful to inform people using sms . 但是呢,我觉得有这样一个短信告诉大家呢是非常有用的啊。 Query: 8 ||| some car owners said that <blank> was very good。 有车主表示,说这<blank> 非常的好。 Answer: sms 短信 Figure 1: Example of pseudo training sample for zero pronoun resolution system. (The original data is in Chinese, we translate this sample into English for clarity) • Pre-training stage: by using large-scale training data to train the neural network model, we can learn richer word embeddings, as well as relatively reasonable weights in neural networks than just training with a small amount of zero pronoun resolution task training data; • Adaptation stage: after getting the best model that is produced in the previous step, we continue to train the model with task-specific data, which can force the previous model to adapt to the new data, without losing much knowledge that has learned in the previous stage (such as word embeddings). As we will see in the experiment section that the proposed two-step training approach is effective and brings significant improvements. 2.3 Attention-based Neural Network Model Our model is primarily an attention-based neural network model, which is similar to Attentive Reader proposed by (Hermann et al., 2015). Formally, when given a set of training triple ⟨D, Q, A⟩, we will construct our network in the following way. Firstly, we project one-hot representation of document D and query Q into a continuous space with the shared embedding matrix We. Then we input these embeddings into different bidirectional RNN to get their contextual representations respectively. 
In our model, we used the bidirectional Gated Recurrent Unit (GRU) as RNN implementation (Cho et al., 2014). e(x) = We · x, where x ∈D, Q (2) −→ hs = −−−→ GRU(e(x)); ←− hs = ←−−− GRU(e(x)) (3) hs = [−→ hs; ←− hs] (4) For the query representation, instead of concatenating the final forward and backward states as its representations, we directly get an averaged representations on all bi-directional RNN slices, which can be illustrated as hquery = 1 n n X t=1 hquery(t) (5) For the document, we place a soft attention over all words in document (Bahdanau et al., 2014), which indicate the degree to which part of document is attended when filling the blank in the query sentence. Then we calculate a weighted sum of all document tokens to get the attended representation of document. m(t) = tanh(W · hdoc(t) + U · hquery) (6) α(t) = exp(Ws · m(t)) nP j=1 exp(Ws · m(j)) (7) hdoc att = hdoc · α (8) where variable α(t) is the normalized attention weight at tth word in document, hdoc is a matrix that concatenate all hdoc(t) in sequence. hdoc = concat[hdoc(1), hdoc(2), ..., hdoc(t)] (9) Then we use attended document representation and query representation to estimate the final answer, which can be illustrated as follows, where V 104 Bi-GRU Encoder Σ d1 d2 d3 d4 q1 q2 q3 Query Softmax Layer Concat Layer AttentionLayer Answer Document Embedding Layer Figure 2: Architecture of attention-based neural network model for zero pronoun resolution task. is the vocabulary, r = concat[hdoc att, hquery] (10) P(A|D, Q) ∝softmax(Wr · r) , s.t. A ∈V (11) Figure 2 shows the proposed neural network architecture. Note that, for zero pronoun resolution task, antecedents of zero pronouns are always noun phrases (NPs), while our model generates only one word (a noun or a pronoun) as the result. To better adapt our model to zero pronoun resolution task, we further process the output result in the following procedure. First, for a given zero pronoun, we extract a set of NPs as its candidates utilizing the same strategy as (Chen and Ng, 2015). Then, we use our model to generate an answer (one word) for the zero pronoun. After that, we go through all the candidates from the nearest to the far-most. For an NP candidate, if the produced answer is its head word, we then regard this NP as the antecedent of the given zero pronoun. By doing so, for a given zero pronoun, we generate an NP as the prediction of its antecedent. 2.4 Unknown Words Processing Because of the restriction on both memory occupation and training time, it is usually suggested to use a shortlist of vocabulary in neural network training. However, we often replace the out-ofvocabularies to a unique special token, such as ⟨unk⟩. But this may place an obstacle in real world test. When the model predicts the answer as ⟨unk⟩, we do not know what is the exact word it represents in the document, as there may have many ⟨unk⟩s in the document. In this paper, we propose to use a simple but effective way to handle unknown words issue. The idea is straightforward, which can be illustrated as follows. • Identify all unknown words inside of each ⟨D, Q, A⟩; • Instead of replacing all these unknown words into one unique token ⟨unk⟩, we make a hash table to project these unique unknown words to numbered tokens, such as ⟨unk1⟩, ⟨unk2⟩, ..., ⟨unkN⟩in terms of its occurrence order in the document. Note that, the same words are projected to the same unknown word tokens, and all these projections are only valid inside of current sample. 
For example, ⟨unk1⟩indicate the first unknown word, say “apple”, in the current sample, but in another sample the ⟨unk1⟩may indicate the unknown word “orange”. That is, the unknown word labels are indicating position features rather than the exact word; • Insert these unknown marks in the vocabulary. These marks may only take up dozens of slots, which is negligible to the size of shortlists (usually 30K ∼100K). (a) The weather today is not as pleasant as the weather of yesterday. (b) The <unk> today is not as <unk> as the <unk> of yesterday. (c) The <unk1> today is not as <unk2> as the <unk1> of yesterday. Figure 3: Example of unknown words processing. a) original sentence; b) original unknown words processing method; c) our method We take one sentence “The weather of today is not as pleasant as the weather of yesterday.” as an example to show our unknown word processing method, which is shown in Figure 3. If we do not discriminate the unknown words and assign different unknown words with the same token ⟨unk⟩, it would be impossible for us to know what is the exact word that ⟨unk⟩represents for in the real test. However, when using our proposed unknown word processing method, if the model predicts a ⟨unkX⟩as the answer, 105 we can simply scan through the original document and identify its position according to its unknown word number X and replace the ⟨unkX⟩with the real word. For example, in Figure 3, if we adopt original unknown words processing method, we could not know whether the ⟨unk⟩is the word “weather” or “pleasant”. However, when using our approach, if the model predicts an answer as ⟨unk1⟩, from the original text, we can know that ⟨unk1⟩represents the word “weather”. 3 Experiments 3.1 Data In our experiments, we choose a selection of public news data to generate large-scale pseudo training data for pre-training our neural network model (pre-training step)1. In the adaptation step, we used the official dataset OntoNotes Release 5.02 which is provided by CoNLL-2012 shared task, to carry out our experiments. The CoNLL2012 shared task dataset consists of three parts: a training set, a development set and a test set. The datasets are made up of 6 different domains, namely Broadcast News (BN), Newswires (NW), Broadcast Conversations (BC), Telephone Conversations (TC), Web Blogs (WB), and Magazines (MZ). We closely follow the experimental settings as (Kong and Zhou, 2010; Chen and Ng, 2014, 2015, 2016), where we treat the training set for training and the development set for testing, because only the training and development set are annotated with ZPs. The statistics of training and testing data is shown in Table 1 and 2 respectively. Sentences # Query # General Train 18.47M 1.81M Domain Train 122.8K 9.4K Validation 11,191 2,667 Table 1: Statistics of training data, including pseudo training data and OntoNotes 5.0 training data. 3.2 Neural Network Setups Training details of our neural network models are listed as follows. 1The news data is available at http://www.sogou. com/labs/dl/cs.html 2http://catalog.ldc.upenn.edu/ LDC2013T19 Docs Sentences Words AZPs Test 172 6,083 110K 1,713 Table 2: Statistics of test set (OntoNotes 5.0 development data). • Embedding: We use randomly initialized embedding matrix with uniformed distribution in the interval [-0.1,0.1], and set units number as 256. No pre-trained word embeddings are used. • Hidden Layer: We use GRU with 256 units, and initialize the internal matrix by random orthogonal matrices (Saxe et al., 2013). 
As GRU still suffers from the gradient exploding problem, we set gradient clipping threshold to 10. • Vocabulary: As the whole vocabulary is very large (over 800K), we set a shortlist of 100K according to the word frequency and unknown words are mapped to 20 ⟨unkX⟩using the proposed method. • Optimization: We used ADAM update rule (Kingma and Ba, 2014) with an initial learning rate of 0.001, and used negative loglikelihood as the training objective. The batch size is set to 32. All models are trained on Tesla K40 GPU. Our model is implemented with Theano (Theano Development Team, 2016) and Keras (Chollet, 2015). 3.3 Experimental results Same to the previous researches that are related to zero pronoun resolution, we evaluate our system performance in terms of F-score (F). We focus on AZP resolution process, where we assume that gold AZPs and gold parse trees are given3. The same experimental setting is utilized in (Chen and Ng, 2014, 2015, 2016). The overall results are shown in Table 3, where the performances of each domain are listed in detail and overall performance is also shown in the last column. • Overall Performance We employ four Chinese ZP resolution baseline systems on OntoNotes 5.0 dataset. As we can 3All gold information are provided by the CoNLL-2012 shared task dataset 106 NW (84) MZ (162) WB (284) BN (390) BC (510) TC (283) Overall Kong and Zhou (2010) 34.5 32.7 45.4 51.0 43.5 48.4 44.9 Chen and Ng (2014) 38.1 31.0 50.4 45.9 53.8 54.9 48.7 Chen and Ng (2015) 46.4 39.0 51.8 53.8 49.4 52.7 50.2 Chen and Ng (2016) 48.8 41.5 56.3 55.4 50.8 53.1 52.2 Our Approach† 59.2 51.3 60.5 53.9 55.5 52.9 55.3 Table 3: Experimental result (F-score) on the OntoNotes 5.0 test data. The best results are marked with bold face. † indicates that our approach is statistical significant over the baselines (using t-test, with p < 0.05). The number in the brackets indicate the number of AZPs. see that our model significantly outperforms the previous state-of-the-art system (Chen and Ng, 2016) by 3.1% in overall F-score, and substantially outperform the other systems by a large margin. When observing the performances of different domains, our approach also gives relatively consistent improvements among various domains, except for BN and TC with a slight drop. All these results approve that our proposed approach is effective and achieves significant improvements in AZP resolution. In our quantitative analysis, we investigated the reasons of the declines in the BN and TC domain. A primary observation is that the word distributions in these domains are fairly different from others. The average document length of BN and TC are quite longer than other domains, which suggest that there is a bigger chance to have unknown words than other domains, and add difficulties to the model training. Also, we have found that in the BN and TC domains, the texts are often in oral form, which means that there are many irregular expressions in the context. Such expressions add noise to the model, and it is difficult for the model to extract useful information in these contexts. These phenomena indicate that further improvements can be obtained by filtering stop words in contexts, or increasing the size of task-specific data, while we leave this in the future work. • Effect of UNK processing As we have mentioned in the previous section, traditional unknown word replacing methods are vulnerable to the real word test. To alleviate this issue, we proposed the UNK processing mechanism to recover the UNK tokens to the real words. 
In Table 4, we compared the performance that with and without the proposed UNK processing, to show whether the proposed UNK processing method is effective. As we can see that, by applying our UNK processing mechanism, the model do learned the positional features in these lowfrequency words, and brings over 3% improvements in F-score, which indicated the effectiveness of our UNK processing approach. F-score Without UNK replacement 52.2 With UNK replacement 55.3 Table 4: Performance comparison on whether using the proposed unknown words processing. • Effect of Domain Adaptation We also tested out whether our domain adaptation method is effective. In this experiments, we used three different types of training data: only pseudo training data, only task-specific data, and our adaptation method, i.e. using pseudo training data in the pre-training step and task-specific data for domain adaptation step. The results are given in Table 5. As we can see that, using either pseudo training data or task-specific data alone can not bring inspiring result. By adopting our domain adaptation method, the model could give significant improvements over the other models, which demonstrate the effectiveness of our proposed two-step training approach. An intuition behind this phenomenon is that though pseudo training data is fairly big enough to train a reliable model parameters, there is still a gap to the real zero pronoun resolution tasks. On the contrary, though task-specific training data is exactly the same type as the real test, the quantity is not enough to train a reasonable model (such as word embedding). So it is better to make use of both to 107 take the full advantage. However, as the original task-specific data is fairly small compared to pseudo training data, we also wondered if the large-scale pseudo training data is only providing rich word embedding information. So we use the large-scale pseudo training data for embedding training using GloVe toolkit (Pennington et al., 2014), and initialize the word embeddings in the “only task-specific data” system. From the result we can see that the pseudo training data provide more information than word embeddings, because though we used GloVe embeddings in “only task-specific data”, it still can not outperform the system that uses domain adaptation which supports our claim. F-score Only Pseudo Training Data 41.1 Only Task-Specific Data 44.2 Only Task-Specific Data + GloVe 50.9 Domain Adaptation 55.3 Table 5: Performance comparison of using different training data. 4 Error Analysis To better evaluate our proposed approach, we performed a qualitative analysis of errors, where two major errors are revealed by our analysis, as discussed below. 4.1 Effect of Unknown Words Our approach does not do well when there are lots of ⟨unk⟩s in the context of ZPs, especially when the ⟨unk⟩s appears near the ZP. An example is given below, where words with # are regarded as ⟨unk⟩s in our model. φ 登上# 太平山# 顶, 将香港岛# 和维多 利亚港# 的美景尽收眼底。 φ Successfully climbed# the peak of [Taiping Mountain]#, to have a panoramic view of the beauty of [Hong Kong Island]# and [Victoria Harbour]#. In this case, the words “登上/climbed” and “太 平山/Taiping Mountain” that appears immediately after the ZP “φ” are all regarded as ⟨unk⟩s in our model. As we model the sequence of words by RNN, the ⟨unk⟩s make the model more difficult to capture the semantic information of the sentence, which in turn influence the overall performance. 
Especially for the words that are near the ZP, which play important roles when modeling context information for the ZP. By looking at the word “顶/peak”, it is hard to comprehend the context information, due to the several surrounding ⟨unk⟩s. Though our proposed unknown words processing method is effective in empirical evaluation, we think that more advanced method for unknown words processing would be of a great help in improving comprehension of the context. 4.2 Long Distance Antecedents Also, our model makes incorrect decisions when the correct antecedents of ZPs are in long distance. As our model chooses answer from words in the context, if there are lots of words between the ZP and its antecedent, more noise information are introduced, and adds more difficulty in choosing the right answer. For example: 我帮不了那个人... ... 那天结束后φ 回到 家中。 I can’t help that guy ... ... After that day, φ return home. In this case, the correct antecedent of ZP “φ” is the NP candidate “我/I”. By seeing the contexts, we observe that there are over 30 words between the ZP and its antecedent. Although our model does not intend to fill the ZP gap only with the words near the ZP, as most of the antecedents appear just a few words before the ZPs, our model prefers the nearer words as correct antecedents. Hence, once there are lots of words between ZP and its nearest antecedent, our model can sometimes make wrong decisions. To correctly handle such cases, our model should learn how to filter the useless words and enhance the learning of longterm dependency. 5 Related Work 5.1 Zero pronoun resolution For Chinese zero pronoun (ZP) resolution, early studies employed heuristic rules to Chinese ZP resolution. Converse (2006) proposes a rule-based method to resolve the zero pronouns, by utilizing Hobbs algorithm (Hobbs, 1978) in the CTB documents. Then, supervised approaches to this task have been vastly explored. Zhao and Ng (2007) first present a supervised machine learning approach to the identification and resolution of Chinese ZPs. Kong and Zhou (2010) develop a tree-kernel based approach for Chinese ZP resolution. More recently, unsupervised approaches 108 have been proposed. Chen and Ng (2014) develop an unsupervised language-independent approach, utilizing the integer linear programming to using ten overt pronouns. Chen and Ng (2015) propose an end-to-end unsupervised probabilistic model for Chinese ZP resolution, using a salience model to capture discourse information. Also, there have been many works on ZP resolution for other languages. These studies can be divided into rule-based and supervised machine learning approaches. Ferr´andez and Peral (2000) proposed a set of hand-crafted rules for Spanish ZP resolution. Recently, supervised approaches have been exploited for ZP resolution in Korean (Han, 2006) and Japanese (Isozaki and Hirao, 2003; Iida et al., 2006, 2007; Sasano and Kurohashi, 2011). Iida and Poesio (2011) developed a cross-lingual approach for Japanese and Italian ZPs where an ILPbased model was employed to zero anaphora detection and resolution. In sum, most recent researches on ZP resolution are supervised approaches, which means that their performance highly relies on large-scale annotated data. Even for the unsupervised approach (Chen and Ng, 2014), they also utilize a supervised pronoun resolver to resolve ZPs. Therefore, the advantage of our proposed approach is obvious. 
We are able to generate large-scale pseudo training data for ZP resolution, and also we can benefit from the task-specific data for fine-tuning via the proposed two-step training approach. 5.2 Cloze-style Reading Comprehension Our neural network model is mainly motivated by the recent researches on cloze-style reading comprehension tasks, which aims to predict one-word answer given the document and query. These models can be seen as a general model of mining the relations between the document and query, so it is promising to combine these models to the specific domain. A representative work of cloze-style reading comprehension is done by Hermann et al. (2015). They proposed a methodology for obtaining large quantities of ⟨D, Q, A⟩triples. By using this method, a large number of training data can be obtained without much human intervention, and make it possible to train a reliable neural network. They used attention-based neural networks for this task. Evaluation on CNN/DailyMail datasets showed that their approach is much effective than traditional baseline systems. While our work is similar to Hermann et al. (2015), there are several differences which can be illustrated as follows. Firstly, though we both utilize the large-scale corpus, they require that the document should accompany with a brief summary of it, while this is not always available in most of the document, and it may place an obstacle in generating limitless training data. In our work, we do not assume any prerequisite of the training data, and directly extract queries from the document, which makes it easy to generate large-scale training data. Secondly, their work mainly focuses on reading comprehension in the general domain. We are able to exploit large-scale training data for solving problems in the specific domain, and we proposed two-step training method which can be easily adapted to other domains as well. 6 Conclusion In this study, we propose an effective way to generate and exploit large-scale pseudo training data for zero pronoun resolution task. The main idea behind our approach is to automatically generate large-scale pseudo training data and then utilize an attention-based neural network model to resolve zero pronouns. For training purpose, two-step training approach is employed, i.e. a pre-training and adaptation step, and this can be also easily applied to other tasks as well. The experimental results on OntoNotes 5.0 corpus are encouraging, showing that the proposed model and accompanying approaches significantly outperforms the stateof-the-art systems. The future work will be carried out on two main aspects: First, as experimental results show that the unknown words processing is a critical part in comprehending context, we will explore more effective way to handle the UNK issue. Second, we will develop other neural network architecture to make it more appropriate for zero pronoun resolution task. Acknowledgements We would like to thank the anonymous reviewers for their thorough reviewing and proposing thoughtful comments to improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015407, Key Projects of National Natural Science Foundation of China via grant 61632011, 109 and National Natural Science Youth Foundation of China via grant 61502120. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . 
Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. Ltp: A chinese language technology platform. In Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations. Association for Computational Linguistics, pages 13–16. Chen Chen and Vincent Ng. 2013. Chinese zero pronoun resolution: Some recent advances. In EMNLP. pages 1360–1365. Chen Chen and Vincent Ng. 2014. Chinese zero pronoun resolution: An unsupervised approach combining ranking and integer linear programming. In Twenty-Eighth AAAI Conference on Artificial Intelligence. Chen Chen and Vincent Ng. 2015. Chinese zero pronoun resolution: A joint unsupervised discourseaware model rivaling state-of-the-art resolvers. In Proceedings of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). page 320. Chen Chen and Vincent Ng. 2016. Chinese zero pronoun resolution with deep neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 778–788. http://aclweb.org/anthology/P161074. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Franc¸ois Chollet. 2015. Keras. https://github. com/fchollet/keras. Susan P Converse. 2006. Pronominal anaphora resolution in chinese . Antonio Ferr´andez and Jes´us Peral. 2000. A computational approach to zero-pronouns in spanish. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 166–172. Na-Rae Han. 2006. Korean zero pronouns: analysis and resolution. Ph.D. thesis, Citeseer. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1684– 1692. Jerry R Hobbs. 1978. Resolving pronoun references. Lingua 44(4):311–338. Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2006. Exploiting syntactic patterns as clues in zero-anaphora resolution. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 625–632. Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2007. Zero-anaphora resolution by learning rich syntactic pattern features. ACM Transactions on Asian Language Information Processing (TALIP) 6(4):1. Ryu Iida and Massimo Poesio. 2011. A cross-lingual ilp solution to zero anaphora resolution. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 804–813. Hideki Isozaki and Tsutomu Hirao. 2003. Japanese zero pronoun resolution based on ranking rules and machine learning. In Proceedings of the 2003 conference on Empirical methods in natural language processing. Association for Computational Linguistics, pages 184–191. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Fang Kong and Guodong Zhou. 2010. A tree kernelbased unified framework for chinese zero anaphora resolution. 
In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 882–891. Lluis Marquez Emili Sapena M Antonia Marti Mariona Taule Veronique Hoste Massimo Poesio Yannick Versley Marta Recasens. 2010. Semeval-2010 task 1: Coreference resolution in multiple languages . Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1532–1543. http://aclweb.org/anthology/D14-1162. Alessandro Moschitti Nianwen Xue Olga Uryupina Yuchen Zhang Sameer Pradhan. 2012. Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes . 110 Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to japanese zero anaphora resolution with large-scale lexicalized case frames. In IJCNLP. pages 758–766. Andrew M Saxe, James L McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 . Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688. http://arxiv.org/abs/1605.02688. Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of chinese zero pronouns: A machine learning approach. In EMNLP-CoNLL. volume 2007, pages 541–550. 111
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1084–1094 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1100 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1084–1094 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1100 Supervised Learning of Automatic Pyramid for Optimization-Based Multi-Document Summarization Maxime Peyrard and Judith Eckle-Kohler Research Training Group AIPHES and UKP Lab Computer Science Department, Technische Universit¨at Darmstadt www.aiphes.tu-darmstadt.de, www.ukp.tu-darmstadt.de Abstract We present a new supervised framework that learns to estimate automatic Pyramid scores and uses them for optimizationbased extractive multi-document summarization. For learning automatic Pyramid scores, we developed a method for automatic training data generation which is based on a genetic algorithm using automatic Pyramid as the fitness function. Our experimental evaluation shows that our new framework significantly outperforms strong baselines regarding automatic Pyramid, and that there is much room for improvement in comparison with the upperbound for automatic Pyramid. 1 Introduction We consider extractive text summarization, the task of condensing a textual source, e.g., a set of source documents in multi-document summarization (MDS), into a short summary text. The quality of an automatic system summary is traditionally evaluated by comparing it against one or more reference summaries written by humans. This comparison is performed by means of an evaluation metric measuring indicators of summary quality and combining them into an aggregated score. Many state-of-the-art summarization systems cast extractive summarization as an optimization problem and maximize an objective function in order to create good, i.e., high-scoring summaries. To this end, optimization-based systems commonly use an objective function which encodes exactly those quality indicators which are measured by the particular evaluation metric being used. Some systems even employ an approximation of the evaluation metric as objective function. Consider as an example the ROUGE metric which has become a de-facto standard for summary evaluation (Lin, 2004). ROUGE computes the n-gram overlap between a system summary and a pool of reference summaries. There are several previous approaches which have used an approximation of ROUGE as the optimization objective (e.g., Sipos et al. (2012); Peyrard and EckleKohler (2016a)). However, ROUGE has been widely criticized for being too simplistic and not suitable for capturing important quality aspects we are interested in. In particular, ROUGE does not capture sentences which are semantically equivalent but expressed with different words (Nenkova et al., 2007). Ideally, we would like to evaluate our summaries based on human judgments. A well-known example of such a human evaluation method is the so-called Pyramid method (Nenkova et al., 2007): it evaluates the particular quality aspect of content selection and is based on a manual comparison of Summary Content Units (SCUs) in reference summaries against SCUs in system summaries. While the resulting Pyramid score is much more meaningful and informative than ROUGE, it is very expensive to obtain, and – worse – not reproducible. 
These issues have been addressed by a line of research aimed at automating the Pyramid evaluation (Harnly et al., 2005; Passonneau et al., 2013). Recently, Yang et al. (2016) developed a freely available off-the-shelf system for automatic Pyramid scoring called PEAK, which uses open Information Extraction (open IE) propositions as SCUs and relies on proposition comparison. Automatic Pyramid (AP) scores are reproducible, and unlike ROUGE, they are based on semantically motivated content units (SCUs) rather than word n-grams. Moreover, they correlate better with human judgments than ROUGE (Yang et al., 2016). Given these recent advances in the automatic 1084 evaluation of summaries regarding content selection, we believe that research in optimizationbased summarization should move away from ROUGE towards AP as a more meaningful evaluation metric to approximate and to optimize. In our work, we are the first to explore this new direction and to systematically investigate the use of AP in optimization-based extractive summarization. We make the following contributions: • We compute an upper-bound for AP with a Genetic Algorithm (GA), and compare it to the ROUGE upper-bound. • We develop a new extractive MDS system specifically optimizing for an approximation of AP. Our system uses a supervised learning setup to learn an approximation of AP from automatically generated training data. We constrain the learned approximation of AP to be linear so that we can extract summaries efficiently via Integer Linear Programming (ILP). Our experimental evaluation shows that our approach significantly outperforms strong baselines on the AP metric. The code both for the new upper-bound and for our ILP is available at github.com/UKPLab/ acl2017-optimize_pyramid. 2 Background In this section, we summarize the Pyramid method and the PEAK system, the automated version of Pyramid we consider in this work. Pyramid The Pyramid method (Nenkova et al., 2007) is a manual evaluation method which determines to what extent a system summary covers the content expressed in a set of reference summaries. The comparison of system summary content to reference summary content is performed on the basis of SCUs which correspond to semantically motivated, subsentential units, such as phrases or clauses. The Pyramid method consists of two steps: the creation of a Pyramid set from reference summaries, and second, Pyramid scoring of system summaries based on the Pyramid set. In the first step, humans annotate phrasal content units in the reference summaries and group them into clusters of semantically equivalent phrases. The resulting clusters are called SCUs and the annotators assign an SCU label to each cluster, which is a sentence describing the cluster content in their own words. The final set of SCUs forms the Pyramid set. Each SCU has a weight corresponding to the number of reference summaries in which the SCU appears. Since each SCU must not appear more than once in each reference summary, the maximal weight of an SCU is the total number of reference summaries. In the second step, humans annotate phrasal content units in a system summary and align them to the corresponding SCUs in the Pyramid set. The Pyramid score of a system summary is then calculated as the sum of the SCU weights for all Pyramid set SCUs being aligned to annotated system summary phrases. PEAK The AP system PEAK by Yang et al. (2016) uses clauses as the content expressing units and represents them as propositions in the open IE paradigm. 
An open IE proposition is a triple of subject, predicate and object phrases. PEAK uses the state-of-the-art system clausIE (Del Corro and Gemulla, 2013) for proposition extraction. While PEAK includes the automatic creation of Pyramid sets from reference summaries, as well as automatic Pyramid scoring of system summaries, in this work, we use PEAK for automatic scoring only. As for the Pyramid sets, we can assume that these have already been created, either via PEAK or by humans (e.g., using the TAC 2009 data1). Since automatic scoring with PEAK requires that the Pyramid sets consist of representative open IE propositions which constitute the automated counterparts of the SCUs, we first need to represent the manually constructed SCUs as open IE propositions, too. To this end, we use clausIE to extract an open IE proposition from each SCU label – a sentence describing the cluster content. As a result, each pyramid set is represented as a list of propositions {pj} with a weight taken from the underlying SCU. For scoring, PEAK processes a system summary with clausIE, converting it from a list of sentences to a list of propositions {si}. A bipartite graph G is constructed, where the two sets of nodes are the summary propositions {si} and the pyramid propositions {pj}. An edge is drawn between si and pj if the similarity is above a given threshold. PEAK computes the similarity with the ADW system (Align, Disambiguate and Walk), a system for computing text similarity based on WordNet, which reaches state-of-the1http://tac.nist.gov/2009/ Summarization 1085 art performance but is slow (Pilehvar et al., 2013). Since each system summary unit can be aligned to at most one SCU, the alignment of the summary propositions {si} and the pyramid propositions {pj} is equivalent to finding a maximum weight matching, which PEAK solves using the Munkres-Kuhn bipartite graph algorithm. From the matched pyramid propositions {pj} the final pyramid score is computed. 3 Approach 3.1 Upper-bound for Automatic Pyramid We start by computing upper-bound summaries according to AP in order to gain a better understanding of the metric. Notations Let D = {si} be a document collection considered as a set of sentences. A summary S is simply a subset of D. We use ppyr to denote the set of propositions in the Pyramid sets extracted from the SCU labels using clausIE. The upper-bound is the set of sentences S∗with the best AP score. Method The task is to extract the set of sentences which contains the propositions matching most of the highest-weighted SCUs, thus resulting in the best matching of propositions, i.e., the highest AP score possible. Formally, we have to solve the following optimization problem: S∗= argmax S AutoPyr(S) (1) Unfortunately, it cannot be solved directly via ILP because of the Munkres-Kuhn bipartite graph algorithm within AP. While Munkres-Kuhn is an ILP, we solve a different problem. In our problem, Munkres-Kuhn would act as constraint because we are looking for the best matching among all valid matchings. Munkres-Kuhn only yields the valid matching for one particular set of sentences. One global ILP can be written down by enumerating all possible matchings in the constraints but it will have a completely unrealistic runtime. Instead, we have to rely on search-based algorithms and compute summaries close to the upperbound. We search for such an approximate solution by employing a meta-heuristic solver introduced recently for extractive MDS by Peyrard and Eckle-Kohler (2016a). 
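As a side note before the solver details, the PEAK-style scoring described earlier (thresholded proposition similarities, a maximum-weight bipartite matching, and summed SCU weights of the matched Pyramid propositions) can be sketched with SciPy's assignment solver. The solver choice and the toy numbers are assumptions; PEAK itself computes similarities with ADW and runs Munkres-Kuhn internally.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def peak_style_score(similarity, scu_weights, threshold=0.6):
    """Score a summary from a (summary props x pyramid props) similarity matrix.

    Edges below the threshold are removed, a maximum-weight matching is found
    (each summary proposition aligns to at most one pyramid proposition),
    and the SCU weights of the matched pyramid propositions are summed.
    """
    sim = np.where(similarity >= threshold, similarity, 0.0)
    rows, cols = linear_sum_assignment(-sim)            # Hungarian / Munkres-Kuhn, maximizing
    matched = [j for i, j in zip(rows, cols) if sim[i, j] > 0.0]
    return sum(scu_weights[j] for j in matched)

sim = np.array([[0.90, 0.20, 0.10],     # summary proposition s1 vs pyramid propositions p1..p3
                [0.30, 0.75, 0.55]])    # summary proposition s2
print(peak_style_score(sim, scu_weights=[4, 2, 1]))     # s1->p1, s2->p2  =>  4 + 2 = 6
```

Note that this sketch matches by maximizing total similarity; depending on the PEAK variant, edge weights may also incorporate SCU weights, so it is only an approximation of the described procedure.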
Specifically, we use the tool published with their paper.2 Their meta2https://github.com/UKPLab/ coling2016-genetic-swarm-MDS heuristic solver implements a Genetic Algorithm (GA) to create and iteratively optimize summaries over time. In this implementation, the individuals of the population are the candidate solutions which are valid extractive summaries. Valid means that the summary meets the length constraint. Each summary is represented by a binary vector indicating for each sentence in the source document whether it is included in the summary or not. The size of the population is a hyper-parameter that we set to 100. Two evolutionary operators are applied: the mutation and the reproduction. The mutation happens to several randomly chosen summaries by randomly removing one of its sentences and adding a new one that does not violate the length constraint. The reproduction is performed by randomly extracting a valid summary from the union of sentences of randomly selected parent summaries. Both operators are controlled by hyperparameters which we set to their default values. In our scenario, the fitness function is the AP metric, which takes a summary S as input and outputs its AP score. S is converted into a list of propositions pS by looking-up the propositions of each sentence in S from a pre-computed hashmap. For all sentences in the document collection D, the hash-map stores the corresponding propositions. Then the Munkres-Kuhn algorithm is applied to pS and ppyr in order to find matching propositions, and finally the scores of their corresponding SCUs are used to evaluate the fitness of the summary. The runtime might become an issue, because the similarity computation between propositions via ADW is slow. However, all the necessary information is present in the similarity matrix A defined by: Aij = ADW(pD i , ppyr j ) (2) Here Aij is the semantic similarity between the proposition pD i from the source document i and the proposition pP j from the Pyramid set j. A has dimensions m × n if m is the number of propositions in the document collection and n the number of propositions in the Pyramid set. We keep the runtime low by pre-computing the similarity matrix A. With a population of 100 summaries in the GA, the algorithm converges in less than a minute to high scoring summaries, which we can expect to be close to the real upper-bound. 1086 3.2 Supervised Setup to Learn an Approximation of AP We denote the true AP scoring function by π∗. π∗scores summaries by matching the summary propositions to the Pyramid propositions in Ppyr as described before. In this work, we aim to learn a function π, which approximates π∗without having access to Ppyr, but only to the document collection D. Formally, it means that over all document collections D and all summaries S, we look for π which minimizes the following loss: L(π) = X D∈D X S∈S ∥π(D, S) −π∗(Ppyr, S)∥2 (3) This states that the learned π minimizes the squared distance from π∗over the available training data. Model Note that we simply denote π(D, S) by π(S) as it is not ambiguous which document collection is used when S is a summary of D. In order to be able to use an exact and efficient solver like ILP, we constrain π to be a linear function. Therefore, we look for π of the following form: π(S) = X s∈S fθ(s) − X i>j gγ(si ∩sj) (4) Two functions are jointly learned: fθ is a function scoring individual sentences, and gγ is a function scoring the intersection of sentences. θ ∪γ is the set of learned paramaters. 
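Returning briefly to the genetic search described above (the learning scenario is developed further below), a simplified sketch of such a solver follows. Mutation is omitted and the hyper-parameters are placeholders; in the setting above, the fitness function would be the AP score computed from the precomputed proposition similarity matrix rather than the toy scorer used here.

```python
import random

def sample_valid_summary(candidate_idxs, sent_lengths, max_len):
    """Randomly build a summary (set of sentence indices) respecting the length budget."""
    summary, total = set(), 0
    for i in random.sample(candidate_idxs, len(candidate_idxs)):
        if total + sent_lengths[i] <= max_len:
            summary.add(i)
            total += sent_lengths[i]
    return summary

def genetic_search(sent_lengths, fitness, max_len, pop_size=100, generations=30):
    """Toy GA over extractive summaries; 'fitness' plays the role of the AP metric."""
    all_idxs = list(range(len(sent_lengths)))
    population = [sample_valid_summary(all_idxs, sent_lengths, max_len) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Reproduction: re-sample a valid summary from the union of two parents' sentences.
            children.append(sample_valid_summary(list(a | b), sent_lengths, max_len))
        population = survivors + children
    return max(population, key=fitness)

# Toy run: cover the highest-scoring sentences under a 100-word budget.
lengths = [30, 45, 25, 60, 20]
scores = [0.9, 0.2, 0.7, 0.4, 0.6]
print(sorted(genetic_search(lengths, lambda s: sum(scores[i] for i in s), max_len=100)))
```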
We can interpret this learning scenario as jointly learning the sentence importance and the redundancy to get π as close as possible to the true AP π∗. fθ represents the notion of importance learned in the context of AP, while gγ contains notions of coherence and redundancy by scoring sentence intersections. This scenario is intuitive and inspired by previous work on summarization (McDonald, 2007). Now, we explain how to learn these two functions while enforcing π to be linear. Suppose each sentence is represented by a feature set φ and each sentence intersections is represented by φ∩, then the set of features for a summary S is: Φ(S) = { [ s∈S φ(s) ∪ [ i>j φ∩(si ∩sj)} (5) It is clear that the number of features is variable and depends on the number m of sentences in S. In order to deal with a variable number of sentences as input, one could use recurrent neural networks, but at the cost of loosing linearity. Instead, to keep the linearity and to cope with variable sized inputs, we employ linear models for both fθ and gγ: π(S) = X s∈S θ · φ(s) − X i>j γ · φ∩(si ∩sj) (6) By leveraging the properties of linear models we end-up with the following formulation: π(S) = θ · X s∈S φ(s) −γ · X i≥j φ∩(si ∩sj) (7) Because of the linear models, we can sum features over sentences and over sentence intersections to obtain a fixed size feature set: Φ P (S) = {φ P (S) ∪φ P ∩(S)} (8) where we introduced the following notations: φ P (S) = P s∈S φ(s) φ P ∩(S) = P i>j φ(si ∩sj) Suppose φ is composed of k features and φ∩of n features. Then φ P (S) is a vector of dimension k, and similarly φ P ∩(S) is of dimension n. Finally, ΦP is a fixed size feature set of dimension k + n. The function π as defined in equation 6 is still linear with respect to sentence and sentence intersection features, which is convenient for the subsequent summary extraction stage. Features While any feature set for sentences φ and for sentence intersections φ∩could be used, we focused on simple ones in this work. For a sentence s, φ(s) consists of the following features: • Sentence length in number of words. • Sentence position as an integer number starting from 0. • Word overlap with title: Jaccard similarity between the unigrams in the title t and a sentence s: Jaccard(s, t) = |t ∩s| |t ∪s| (9) • Sum of frequency of unigrams and bigrams in the sentence. 1087 • Sum of TF*IDF of unigrams and bigrams in the sentence. The idf of unigrams and bigrams is trained on a background corpus of DBpedia articles.3 • Centrality of the sentence computed via PageRank: A similarity matrix is built between sentences in the document collection based on their TF*IDF vector similarity. Then a power method is applied on the similarity matrix to get PageRank scores of individual sentences. It is similar to the classic LexRank algorithm (Erkan and Radev, 2004). • Propositions centrality: We also use the centrality feature for propositions. Each sentence is scored by the sum of the centrality of its propositions. As PEAK is based on propositions, we expect proposition-level features to provide a useful signal. Finally, φ∩(si ∩sj) consists of the unigram, bigram and trigram overlap between the two sentences si and sj. Training The model is trained with a standard linear least squares regression using pairs of (Φ(S), π∗(S)) as training examples. Because our approach relies on an automatic metric, an arbitrarily large number of summaries and their corresponding scores can be generated. 
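A minimal sketch of this regression setup, assuming the 7 sentence features and 3 overlap features listed above (the discussion of training-data generation continues just below): the summary representation is the sum of per-sentence feature vectors concatenated with the sum of pairwise intersection feature vectors, and ordinary least squares recovers the linear weights. The toy data only stands in for GA-generated (features, AP score) pairs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

N_SENT_FEATS = 7   # length, position, title overlap, frequency, TF*IDF, two centralities
N_PAIR_FEATS = 3   # unigram / bigram / trigram overlap of a sentence pair

def summary_features(sentence_feats, pair_feats):
    """Fixed-size summary representation: summed sentence features
    concatenated with summed sentence-intersection features."""
    phi_sum = np.asarray(sentence_feats).sum(axis=0)
    pair_sum = np.asarray(pair_feats).sum(axis=0) if len(pair_feats) else np.zeros(N_PAIR_FEATS)
    return np.concatenate([phi_sum, pair_sum])

# Toy stand-in for GA-generated training pairs (Phi(S), AP(S)).
rng = np.random.default_rng(0)
X = np.stack([summary_features(rng.random((4, N_SENT_FEATS)), rng.random((6, N_PAIR_FEATS)))
              for _ in range(200)])
y = rng.random(200)

model = LinearRegression().fit(X, y)
theta, neg_gamma = model.coef_[:N_SENT_FEATS], model.coef_[N_SENT_FEATS:]
print(theta.shape, neg_gamma.shape)    # (7,) (3,)
```

Because the scoring function above subtracts the pairwise term, the coefficients fit on the concatenated feature sum correspond to theta and the negation of gamma.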
In contrast, getting manual Pyramid annotations for a large number of summaries would be expensive and timeconsuming. As training examples we take the population of scored summaries created by the same GA we use for computing upper-bound summaries. It is important to note that this GA is also a perfect generator of training instances: the summaries in its population are already scored because the fitness function is the AP metric. Indeed, for each topic, an arbitrarily large amount of scored summaries can be generated by adjusting the size of the population. Moreover, the summaries in the population are very diverse and have a wide range of scores, from almost upper-bound to completely random. Optimization-based Summary Extraction Since the function π is constrained to be linear, we can extract the best scoring summary by solving an ILP. 3http://wiki.dbpedia.org/ nif-abstract-datasets Let x be a binary vector indicating whether sentence i is in the summary or not. Similarly, let α be a binary matrix indicating whether both sentence i and j are in the summary. Finally, let K be the length constraint. With these notations, the best summary is extracted by solving the follwogin ILP: argmax S P si∈S xi∗θ·φ(si)−P i≥j αi,j ∗γ ·φ∩(si∩sj) m P i=1 xi ∗len(si) ≤K ∀(i, j), αi,j −xi ≤0 ∀(i, j), αi,j −xj ≤0 ∀(i, j), xi + xj −αi,j ≤1 Which is the ILP directly corresponding to maximizing π as defined by equation 6. Note that · is the dot product while ∗is the scalar multiplication in R. 4 Experiments 4.1 Setup Dataset We perform our experiments on a multidocument summarization dataset from the Text Analysis Conference (TAC) shared task in 2009, TAC-2009.4 TAC-2009 contains 44 topics, each consisting of 10 news articles to be summarized in a maximum of 100 words. In our experiments, we use only the so-called initial summaries (A summaries), but not the update summaries. For each topic, there are 4 human reference summaries and a manually created Pyramid set. As described in section 2, we pre-processed these Pyramid sets with clausIE in order to make them compatible with PEAK. Metrics We primarily evaluate our system via automatic Pyramid scoring from PEAK, after preprocessing the summaries with clausIE. PEAK has a parameter t which is the minimal similarity value required for matching a summary proposition and a Pyramid proposition. We use two different values: t = 0.6 (AP-60) and t = 0.7 (AP-70). For completeness, we also report the ROUGE scores identified by Owczarzak et al. (2012a) as strongly correlating with human evaluation methods: ROUGE-1 (R-1) and ROUGE-2 (R-2) recall with stemming and stopwords not removed. Finally, we perform significance testing with ttest to compare differences between two means.5 4http://tac.nist.gov/2009/ Summarization/ 5The symbol * indicates that the difference compared to 1088 4.2 Automatic Evalution Upper-bound Comparison We compute the set of upper-bound summaries for both ROUGE-2 (RUB) and for AP (AP-UB).6 Both sets of upperbound summaries are evaluated with ROUGE and AP, and the results are reported in Table 1. R-1 R-2 AP-60 AP-70 R-UB 0.4722* 0.2062* 0.5088 0.3074 AP-UB 0.3598 0.1057 0.5789* 0.3790* Table 1: Upper bound comparison between ROUGE and Automatic Pyramid (AP). Interestingly, we observe significant differences between the two upper-bounds. While it is obvious that each set of upper-bound summaries reaches the best score on the metric it maximizes, the same summary set scores much worse when evaluated with the other metric. 
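To make the extraction ILP above concrete, a small sketch using the PuLP modeling library follows; PuLP and the toy numbers are assumptions, and the code released with the paper is the reference implementation. The sentence scores stand in for θ·φ(si), the pairwise penalties for γ·φ∩(si ∩ sj), and the α variables are linearized with the three standard constraints.

```python
import pulp

def extract_summary_ilp(sent_scores, pair_penalties, sent_lengths, max_len):
    """Select sentences maximizing summed sentence scores minus pairwise penalties,
    subject to a summary length budget."""
    n = len(sent_scores)
    prob = pulp.LpProblem("summary_extraction", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    a = {(i, j): pulp.LpVariable(f"a{i}_{j}", cat="Binary") for i in range(n) for j in range(i)}
    prob += (pulp.lpSum(sent_scores[i] * x[i] for i in range(n))
             - pulp.lpSum(pair_penalties[i][j] * a[(i, j)] for i, j in a))
    prob += pulp.lpSum(sent_lengths[i] * x[i] for i in range(n)) <= max_len
    for (i, j), aij in a.items():
        prob += aij <= x[i]           # alpha_ij can be 1 only if both sentences are selected,
        prob += aij <= x[j]           # and must be 1 if both are selected.
        prob += x[i] + x[j] - aij <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if x[i].value() > 0.5]

print(extract_summary_ilp([3.0, 2.0, 2.5],
                          [[0, 0, 0], [1.5, 0, 0], [0.2, 0.1, 0]],
                          [10, 12, 9], max_len=20))      # [0, 2]
```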
This observation empirically confirms that the two metrics measure different properties of system summaries. Moreover, the upper-bound for AP gives us information about the room for improvement that summarization systems have with respect to AP. This is relevant in the next paragraph, where we compare systems in an end-to-end evaluation. End-to-end Evaluation We evaluate the quality of the summaries extracted by the summarizer π −ILP in a standard end-to-end evaluation scenario. π −ILP is the system composed of the learned function π and the ILP defined in the previous section. Learning π Using our GA data generation method, we produce 100 scored summaries for each of the 44 topics in TAC2009 while computing the upper-bound. We use the threshold value of 0.65 as a compromise between AP-60 and AP70. The data generated have scores ranging from 0. to 0.4627 with an average of 0.1615. The data is well distributed because the standard deviation is 0.1449. A highly diverse set of summaries is produced, because on average two summaries in the training set only have 1.5% sentences in common, and most of the sentences of the source documents are contained in at least one summary. The model is then trained in a leave-one-out cross-validation setup. The parameters θ and γ are the previous best baseline is significant with p ≤0.05. 6We use the parameter t = 0.6 during the upper-bound computation of AP-UB. R-1 R-2 AP-60 AP-70 TF*IDF 0.3251 0.0626 0.2857 0.1053 LexRank 0.3539 0.0900 0.3969 0.1854 ICSI 0.3670 0.1030 0.3520 0.1568 JS-Gen 0.3381 0.0868 0.3745 0.1463 π-ILP 0.3498 0.0867 0.4402* 0.2109* Table 2: End-to-end evaluation of our approach on TAC-2009. trained on all topics but one. The trained model is used to extract a high-scoring summary on the remaining topic by solving the ILP defined above. Our framework is compared to the following baselines: TF*IDF weighting A simple heuristic introduced by Luhn (1958) where each sentence receives a score from the TF*IDF of its terms. The best sentences are greedily extracted until the length constraint is met. We use the implementation available in the sumy package.7 LexRank (Erkan and Radev, 2004) is a popular graph-based approach. A similarity graph G(V, E) is constructed where V is the set of sentences and an edge eij is drawn between sentences vi and vj if and only if the cosine similarity between them is above a given threshold. Sentences are scored according to their PageRank score in G. It is also available in the sumy package. ICSI (Gillick and Favre, 2009) is a recent system that has been identified as one of the stateof-the-art systems by Hong et al. (2014). It is an ILP framework that extracts a summary by solving a maximum coverage problem considering the most frequent bigrams in the source documents. We use the Python implementation released by Boudin et al. (2015). JS-Gen (Peyrard and Eckle-Kohler, 2016a) is a recent approach which uses a GA to minimize the Jensen-Shannon (JS) divergence between the extracted summary and the source documents. JS divergence measures the difference between probability distributions of words in the source documents and in the summary. Results We report the performance of π −ILP in comparison to the baselines in Table 2. The results confirm an expected behavior. Our supervised framework which aims at approximating and maximizing AP, easily and significantly outperforms all the other baselines when evaluated 7https://github.com/miso-belica/sumy 1089 with AP for both values of the threshold. 
While the system is not designed with ROUGE in mind, it still performs reasonably well in the ROUGE evaluation, even though it does not outperform previous works. In general, the two metrics ROUGE and AP do not produce the same rankings of systems. This is another piece of empirical evidence that they measure different properties of summaries. When we compare the system performances to the upper-bound scores reported in Table 1, we see that there is still a large room for improvements. We take a closer look at this performance gap in the next paragraph where we evaluate the learning component of our approach. Evaluation of Learned π In this paragraph, we evaluate the learning of π as an approximation of π∗. We do so by measuring the correlation between π and the true AP π∗. We report three correlation metrics to evaluate and compare the ranking of summaries induced by π and π∗: Pearson’s r, Spearman’s ρ and NDCG. Pearson’s r is a value correlation metric which depicts linear relationship between the scores produced by two ranking lists. Spearman’s ρ is a rank correlation metric which compares the ordering of systems induced by the two ranking lists. NDCG is a metric from information retrieval which compares ranked lists and puts a special emphasis on the top elements by applying logarithm decay weighting for elements further down in the list. Intuitively, it describes how well the π function is able to recognize the best scoring summaries. In our case, it is particularly desirable to have a high NDCG score, because the optimizer extracts summaries with high π scores; we want to confirm that top scoring summaries are also among top scoring summaries according to the true π∗. For comparison, we report how well our baselines correlate with π∗. For this, we consider the scoring function for summaries which is part of all our baselines, and which they explicitly or implicitly optimize: TF*IDF greedily maximizes fTF∗IDF , the sum of the frequency of the words in the summary. ICSI maximizes the sum of the document frequency of bigrams (fICSI). LexRank maximizes fLexRank, the sum of the PageRank of sentences in the summary, and fJS is the JS divergence between the summary and the source docuPearson’s r Spearman’s ρ NDCG fT F ∗IDF 0.1246 0.0765 0.8869 fLexRank 0.1733 0.0879 0.8774 fICSI 0.3742 0.3295 0.8520 fJS 0.4074 0.3833 0.8803 π 0.4929* 0.4667* 0.9429* Table 3: Performance of the supervised learning of π on TAC-2009 in a leave-one-out crossvalidation. ments optimized by JS-Gen. For our supervised learning of π, the training procedure is the same as described in the previous section. The correlation scores are averaged over topics and reported in Table 3. We observe that π is able to approximate AP significantly better than any baseline for all metrics. This explains why optimizing π with ILP outperforms the baseline systems in the end-to-end evaluation (Table 2). The learned π achieves a high NDCG, indicating that optimizing π produces summaries very likely to have high π∗scores. This means that π is capable of accurately identifying high-scoring summaries, which again explains the strong performance of π −ILP. The fact that the overall correlations are lower for every system shows that it is difficult to predict π for poor and average quality summaries. It is interesting to observe that features such as unigram and bigram frequency, which are known to be strong features to approximate ROUGE, are less useful to approximate the more complex AP. 
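For reference, the three correlation measures used in this comparison can be computed as follows. Pearson's r and Spearman's ρ come directly from SciPy; the NDCG sketch uses one common log-discount formulation with the true AP scores as gains, which is an assumption since the exact NDCG variant is not spelled out here.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def ndcg(predicted, true):
    """NDCG of the ranking induced by 'predicted', using 'true' scores as gains."""
    order = np.argsort(predicted)[::-1]
    gains = np.asarray(true, dtype=float)[order]
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    ideal = np.sort(np.asarray(true, dtype=float))[::-1]
    return float(np.sum(gains * discounts) / np.sum(ideal * discounts))

predicted_pi = [0.30, 0.10, 0.50, 0.20]   # scores from a learned approximation
true_ap      = [0.25, 0.05, 0.45, 0.30]   # scores from the true AP metric
print(pearsonr(predicted_pi, true_ap)[0])
print(spearmanr(predicted_pi, true_ap)[0])
print(ndcg(predicted_pi, true_ap))
```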
Feature Weights The advantage of linear models is their interpretability. One can investigate the contribution of each feature by looking at its corresponding weight learned during training. The sign of the weight indicates whether the feature correlates positively or negatively with the results, and its amplitude determines the importance of this feature in the final estimation. We observe that the most useful feature is the proposition centrality, which confirms our expectation that proposition-based features are useful for approximating PEAK. The bigram coverage has also a high weight explaining the strong performance of ICSI. The least useful feature is the sentence position, even if it still contains some useful signal. Interestingly, the analysis of features from the 1090 Pearson’s r Spearman’s ρ NDCG ROUGE −1 0.3292 0.3187 0.7195 ROUGE −2 0.3292 0.2936 0.7259 Table 4: Correlation between ROUGE-1 and ROUGE-2 with AP on the automatically generated training data for TAC-2009. sentence intersection reveals a slightly positive correlation for the unigram and bigram overlap, but a negative correlation for trigram overlap. Our interpretation is that the model learns that good summaries tend to have repeated unigrams and bigrams to ensure some coherence, while the repeated trigrams are more indicative of undesired redundancy. Agreement between ROUGE and AP In the previous paragraphs, we already saw that different metrics produce different rankings of systems. We want to investigate this further and understand to what extent ROUGE and AP disagree. To that end, we use the summaries automatically generated by the genetic algorithm during the upperbound computation. Remember that for each topic of TAC-2009 it produces 100 summaries with a wide range of AP scores. We then score these summaries with both ROUGE-1 and ROUGE-2 and compare how ROUGE metrics correlate with AP. In order to get a meaningful picture, we use the same three correlation metrics as above: Pearson’s, Spearman’s ρ and NDCG. The results are presented in Table 4. We observe a low correlation between ROUGE metrics and AP in terms of both rank correlation (Spearman’s ρ) and value correlation (Pearson’s r). Even though the NDCG numbers are better, the correlation is also relatively low given that higher numbers are usually expected for NDCG (also observed in Table 3). This analysis confirms the initial claim that ROUGE and AP behave quite differently and measure different aspects of summary quality. Therefore, we believe systems developed and trained for AP are worth studying because they necessarily capture different aspects of summarization. 5 Related Work We discuss (i) related work in extractive summarization where an approximation of an automatic evaluation metric was optimized, and (ii) work related to AP specifically. As ROUGE is the metric predominantly used for evaluation of extractive summarization, there are several previous optimization-based approaches which included an approximation of ROUGE in the objective function to maximize. For example, Takamura and Okumura (2010) and Sipos et al. (2012) performed structured output learning (using pairs of summaries and their ROUGE scores available in benchmark datasets as training examples) and thereby learned to maximize the ROUGE scores of the system summaries. 
Peyrard and Eckle-Kohler (2016b) on the other hand, learned an approximation of ROUGE scores for individual sentences in a supervised setup, and subsequently employed these estimated sentence scores in an ILP formulation to extract summaries. There is also recent work on considering fully automatic evaluation metrics (not relying on human reference summaries), such as the JS divergence as optimization objective. Peyrard and Eckle-Kohler (2016a) used metaheuristics to minimize JS divergence in a multi-document summarization approach and showed that the resulting extractive summaries also scored competitively using ROUGE. Regarding AP, there is not much prior work apart from the papers where the different variants of AP have been presented (Harnly et al., 2005; Passonneau et al., 2013; Yang et al., 2016). Especially, there is no prior work in optimization-based extractive summarization which has developed an approximation of AP and used it in an objective function. However, AP as an evaluation metric is becoming ever more important in the context of abstractive summarization, a research topic which has been gaining momentum in the last few years. For example Li (2015) and Bing et al. (2015) use an earlier version of AP based on distributional semantics (Passonneau et al., 2013) to evaluate abstractive multi-document summarization. 6 Discussion and Future Work We presented a supervised framework that learns automatic Pyramid scores and uses them for optimization-based summary extraction. Using the TAC-2009 multi-document summarization dataset, we performed an upper-bound analysis for AP, and we evaluated the summaries extracted with our framework in an end-to-end evaluation 1091 using automatic evaluation metrics. We observed that the summaries extracted with our framework achieve significantly better AP scores than several strong baselines, but compared to the upper-bound for AP, there is still a large room for improvement. We show that AP and ROUGE catch different aspects of summary quality, but further work would be needed in order to substantiate the claim that AP is indeed better than ROUGE. One way of doing so would be to perform a human evaluation of high-scoring summaries according to ROUGE and AP. In general, ROUGE1 and ROUGE-2 were considered as the baselines for validating the performance of AP because these variants strongly correlate with human evaluation methods (Owczarzak et al., 2012a,b). However, the comparison could be repeated with ROUGE-3, ROUGE-4 and ROUGE-BE, which have been found to predict manual Pyramid better than ROUGE-1 and ROUGE-2 (Rankel et al., 2013). More generally, we see two main directions for future research: (i) the more specific question on how to improve the approximation of AP and (ii) the general need for more research on AP. There are several possible ways how to improve the approximation of AP. First, more semanticallyoriented features could be developed, e.g., features based on propositions rather than sentences or n-grams, or word embedding features encoding a large amount of distributional semantic knowledge (Mikolov et al., 2013). Second, the linearity constraint we used for efficiency reasons could be relaxed. Modeling AP as a non-linear function will presumably enhance the approximation. For the extraction of summaries based on a nonlinear function, greedy algorithms or search-based strategies could be used, e.g., the GA we used in this work for the upper-bound computation. 
We see a general need for more research on AP, because the way AP measures the quality aspect of content selection is not only more meaningful than ROUGE, but also applicable to the growing field of abstractive summarization. An important direction would be the improvement of AP itself, both in terms of methods used to compute AP, and in terms of tools: while the current off-the-shelf system PEAK is a promising start, it is very slow and therefore difficult to apply in practice. In this context, we would like to stress that our GA-based method to create training data for learning a model of AP can easily be adapted to any automatic scoring metric, and specifically to other or future AP variants. Finally, we hope to encourage the community to move away from ROUGE and instead consider AP as the main summary evaluation metric. This would be especially interesting for optimizationbased approaches, since the quality of the summaries created by such approaches depends on the quality of the underlying scoring metric. 7 Conclusion We presented the first work on AP in optimizationbased extractive summarization. We computed an upper-bound for AP and developed a supervised framework which learns an approximation of AP based on automatically generated training instances. We could access a large number of high-quality training data by using the population of a genetic algorithm. Our end-to-end evaluation showed that of our framework significantly outperforms strong baselines on the AP metric, but also revealed a large room for improvement in comparison to the upper-bound, which motivates future work on developing systems with better performance on the semantically motivated AP metric. Acknowledgments This work has been supported by the German Research Foundation (DFG) as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1, and via the GermanIsraeli Project Cooperation (DIP, grant No. GU 798/17-1). References Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca Passonneau. 2015. Abstractive MultiDocument Summarization via Phrase Selection and Merging. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1587–1597. Florian Boudin, Hugo Mougard, and Benoit Favre. 2015. Concept-based Summarization using Integer Linear Programming: From Concept Pruning to Multiple Optimal Solutions. In Proceedings of 1092 the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1914– 1918. Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: Clause-based Open Information Extraction. In Proceedings of the 22Nd International Conference on World Wide Web. ACM, Rio de Janeiro, Brazil, pages 355–366. G¨unes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based Lexical Centrality As Salience in Text Summarization. Journal of Artificial Intelligence Research pages 457–479. Dan Gillick and Benoit Favre. 2009. A Scalable Global Model for Summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing. Association for Computational Linguistics, Boulder, Colorado, pages 10–18. Aaron Harnly, Rebecca Passonneau, and Owen Rambow. 2005. Automation of Summary Evaluation by the Pyramid Method. 
In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP). Borovets, Bulgaria, pages 226–232. Kai Hong, John Conroy, benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A Repository of State of the Art and Competitive Baseline Summaries for Generic News Summarization. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14). Reykjavik, Iceland, pages 1608–1616. Wei Li. 2015. Abstractive Multi-document Summarization with Semantic Information Extraction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1908–1913. Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop. Association for Computational Linguistics, Barcelona, Spain, pages 74–81. Hans Peter Luhn. 1958. The Automatic Creation of Literature Abstracts. IBM Journal of Research Development 2:159–165. Ryan McDonald. 2007. A study of global inference algorithms in multi-document summarization. In Proceedings of the 29th European Conference on IR Research. Springer-Verlag, Rome, Italy, ECIR’07, pages 557–564. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26, Curran Associates, Inc., pages 3111– 3119. Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The Pyramid Method: Incorporating Human Content Selection Variation in Summarization Evaluation. ACM Transactions on Speech and Language Processing (TSLP) 4(2). Karolina Owczarzak, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2012a. An Assessment of the Accuracy of Automatic Evaluation in Summarization. In Proceedings of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization. Association for Computational Linguistics, Montr´eal, Canada, pages 1–9. Karolina Owczarzak, Peter A. Rankel, Hoa Trang Dang, and John M. Conroy. 2012b. Assessing the Effect of Inconsistent Assessors on Summarization Evaluation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Jeju Island, Korea, pages 359–362. Rebecca Passonneau, Emily Chen, Weiwei Guo, and Dolores Perin. 2013. Automated Pyramid Scoring of Summaries using Distributional Semantics. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sofia, Bulgaria, pages 143–147. Maxime Peyrard and Judith Eckle-Kohler. 2016a. A General Optimization Framework for MultiDocument Summarization Using Genetic Algorithms and Swarm Intelligence. In Proceedings of the 26th International Conference on Computational Linguistics (COLING 2016). The COLING 2016 Organizing Committee, Osaka, Japan, pages 247 – 257. Maxime Peyrard and Judith Eckle-Kohler. 2016b. Optimizing an Approximation of ROUGE - a ProblemReduction Approach to Extractive Multi-Document Summarization. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1825–1836. Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sofia, Bulgaria, pages 1341–1351. Peter A. Rankel, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2013. A Decade of Automatic Content Evaluation of News Summaries: Reassessing the State of the Art. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sofia, Bulgaria, pages 131–136. Ruben Sipos, Pannaga Shivaswamy, and Thorsten Joachims. 2012. Large-margin Learning of Submodular Summarization Models. In Proceedings 1093 of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Avignon, France, pages 224–233. Hiroya Takamura and Manabu Okumura. 2010. Learning to Generate Summary as Structured Output. In Proceedings of the 19th ACM international Conference on Information and Knowledge Management. Association for Computing Machinery, Toronto , ON, Canada, pages 1437–1440. Qian Yang, Rebecca Passonneau, and Gerard de Melo. 2016. PEAK: Pyramid Evaluation via Automated Knowledge Extraction. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI 2016). AAAI Press, Phoenix, AZ, USA. 1094
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1095–1104 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1101 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1095–1104 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1101 Selective Encoding for Abstractive Sentence Summarization Qingyu Zhou†∗Nan Yang‡ Furu Wei‡ Ming Zhou‡ †Harbin Institute of Technology, Harbin, China ‡Microsoft Research, Beijing, China [email protected] {nanya,fuwei,mingzhou}@microsoft.com Abstract We propose a selective encoding model to extend the sequence-to-sequence framework for abstractive sentence summarization. It consists of a sentence encoder, a selective gate network, and an attention equipped decoder. The sentence encoder and decoder are built with recurrent neural networks. The selective gate network constructs a second level sentence representation by controlling the information flow from encoder to decoder. The second level representation is tailored for sentence summarization task, which leads to better performance. We evaluate our model on the English Gigaword, DUC 2004 and MSR abstractive sentence summarization datasets. The experimental results show that the proposed selective encoding model outperforms the state-ofthe-art baseline models. 1 Introduction Sentence summarization aims to shorten a given sentence and produce a brief summary of it. This is different from document level summarization task since it is hard to apply existing techniques in extractive methods, such as extracting sentence level features and ranking sentences. Early works propose using rule-based methods (Zajic et al., 2007), syntactic tree pruning methods (Knight and Marcu, 2002), statistical machine translation techniques (Banko et al., 2000) and so on for this task. We focus on abstractive sentence summarization task in this paper. Recently, neural network models have been applied in this task. Rush et al. (2015) use autoconstructed sentence-headline pairs to train a neu∗Contribution during internship at Microsoft Research. ral network summarization model. They use a Convolutional Neural Network (CNN) encoder and feed-forward neural network language model decoder for this task. Chopra et al. (2016) extend their work by replacing the decoder with Recurrent Neural Network (RNN). Nallapati et al. (2016) follow this line and change the encoder to RNN to make it a full RNN based sequence-tosequence model (Sutskever et al., 2014). the sri lankan government on wednesday announced the closure of government schools with immediate effect as a military campaign against tamil separatists escalated in the north of the country . sri lanka closes schools as war escalates Figure 1: An abstractive sentence summarization system may produce the output summary by distilling the salient information from the highlight to generate a fluent sentence. We model the distilling process with selective encoding. All the above works fall into the encodingdecoding paradigm, which first encodes the input sentence to an abstract representation and then decodes the intended output sentence based on the encoded information. 
As an extension of the encoding-decoding framework, attentionbased approach (Bahdanau et al., 2015) has been broadly used: the encoder produces a list of vectors for all tokens in the input, and the decoder uses an attention mechanism to dynamically extract encoded information and align with the output tokens. This approach achieves huge success in tasks like machine translation, where alignment between all parts of the input and output are required. However, in abstractive sentence summarization, there is no explicit alignment relationship between the input sentence and the summary ex1095 cept for the extracted common words. The challenge here is not to infer the alignment, but to select the highlights while filtering out secondary information in the input. A desired work-flow for abstractive sentence summarization is encoding, selection, and decoding. After selecting the important information from an encoded sentence, the decoder produces the output summary using the selected information. For example, in Figure 1, given the input sentence, the summarization system first selects the important information, and then rephrases or paraphrases to produce a well-organized summary. Although this is implicitly modeled in the encoding-decoding framework, we argue that abstractive sentence summarization shall benefit from explicitly modeling this selection process. In this paper we propose Selective Encoding for Abstractive Sentence Summarization (SEASS). We treat the sentence summarization as a threephase task: encoding, selection, and decoding. It consists of a sentence encoder, a selective gate network, and a summary decoder. First, the sentence encoder reads the input words through an RNN unit to construct the first level sentence representation. Then the selective gate network selects the encoded information to construct the second level sentence representation. The selective mechanism controls the information flow from encoder to decoder by applying a gate network according to the sentence information, which helps improve encoding effectiveness and release the burden of the decoder. Finally, the attention-equipped decoder generates the summary using the second level sentence representation. We conduct experiments on English Gigaword, DUC 2004 and Microsoft Research Abstractive Text Compression test sets. Our SEASS model achieves 17.54 ROUGE-2 F1, 9.56 ROUGE-2 recall and 10.63 ROUGE-2 F1 on these test sets respectively, which improves performance compared to the state-of-the-art methods. 2 Related Work Abstractive sentence summarization, also known as sentence compression and similar to headline generation, is used to help compress or fuse the selected sentences in extractive document summarization systems since they may inadvertently include unnecessary information. The sentence summarization task has been long connected to the headline generation task. There are some previous methods to solve this task, such as the linguistic rule-based method (Dorr et al., 2003). As for the statistical machine learning based methods, Banko et al. (2000) apply statistical machine translation techniques by modeling headline generation as a translation task and use 8000 article-headline pairs to train the system. Rush et al. (2015) propose leveraging news data in Annotated English Gigaword (Napoles et al., 2012) corpus to construct large scale parallel data for sentence summarization task. 
They propose an ABS model, which consists of an attentive Convolutional Neural Network encoder and an neural network language model (Bengio et al., 2003) decoder. On this Gigaword test set and DUC 2004 test set, the ABS model produces the state-of-theart results. Chopra et al. (2016) extend this work, which keeps the CNN encoder but replaces the decoder with recurrent neural networks. Their experiments showes that the CNN encoder with RNN decoder model performs better than Rush et al. (2015). Nallapati et al. (2016) further change the encoder to an RNN encoder, which leads to a full RNN sequence-to-sequence model. Besides, they enrich the encoder with lexical and statistic features which play important roles in traditional feature based summarization systems, such as NER and POS tags, to improve performance. Experiments on the Gigaword and DUC 2004 test sets show that the above models achieve state-of-theart results. Gu et al. (2016) and Gulcehre et al. (2016) come up similar ideas that summarization task can benefit from copying words from input sentences. Gu et al. (2016) propose CopyNet to model the copying action in response generation, which also applies for summarization task. Gulcehre et al. (2016) propose a switch gate to control whether to copy from source or generate from decoder vocabulary. Zeng et al. (2016) also propose using copy mechanism and add a scalar weight on the gate of GRU/LSTM for this task. Cheng and Lapata (2016) use an RNN based encoder-decoder for extractive summarization of documents. Yu et al. (2016) propose a segment to segment neural transduction model for sequence-tosequence framework. The model introduces a latent segmentation which determines correspondences between tokens of the input sequence and the output sequence. Experiments on this task show that the proposed transduction model per1096 forms comparable to the ABS model. Shen et al. (2016) propose to apply Minimum Risk Training (MRT) in neural machine translation to directly optimize the evaluation metrics. Ayana et al. (2016) apply MRT on abstractive sentence summarization task and the results show that optimizing for ROUGE improves the test performance. 3 Problem Formulation For sentence summarization, given an input sentence x = (x1, x2, . . . , xn), where n is the sentence length, xi ∈Vs and Vs is the source vocabulary, the system summarizes x by producing y = (y1, y2, . . . , yl), where l ≤n is the summary length , yi ∈Vt and Vt is the target vocabulary. If |y| ⊆|x|, which means all words in summary y must appear in given input, we denote this as extractive sentence summarization. If |y| ⊈|x|, which means not all words in summary come from input sentence, we denote this as abstractive sentence summarization. Table 1 provides an example. We focus on abstracive sentence summarization task in this paper. Input: South Korean President Kim Young-Sam left here Wednesday on a week - long state visit to Russia and Uzbekistan for talks on North Korea ’s nuclear confrontation and ways to strengthen bilateral ties . Output: Kim leaves for Russia for talks on NKorea nuclear standoff Table 1: An abstractive sentence summarization example. 4 Model As shown in Figure 2, our model consists of a sentence encoder using the Gated Recurrent Unit (GRU) (Cho et al., 2014), a selective gate network and an attention-equipped GRU decoder. First, the bidirectional GRU encoder reads the input words x = (x1, x2, . . . , xn) and builds its representation (h1, h2, . . . , hn). 
Then the selective gate selects and filters the word representations according to the sentence meaning representation to produce a tailored sentence word representation for abstractive sentence summarization task. Lastly, the GRU decoder produces the output summary with attention to the tailored representation. In the following sections, we introduce the sentence encoder, the selective mechanism, and the summary decoder respectively. 4.1 Sentence Encoder The role of the sentence encoder is to read the input sentence and construct the basic sentence representation. Here we employ a bidirectional GRU (BiGRU) as the recurrent unit, where GRU is defined as: zi = σ(Wz[xi, hi−1]) ri = σ(Wr[xi, hi−1]) ehi = tanh(Wh[xi, ri ⊙hi−1]) hi = (1 −zi) ⊙hi−1 + zi ⊙ehi (1) (2) (3) (4) where Wz, Wr and Wh are weight matrices. The BiGRU consists of a forward GRU and a backward GRU. The forward GRU reads the input sentence word embeddings from left to right and gets a sequence of hidden states, (⃗h1,⃗h2, . . . ,⃗hn). The backward GRU reads the input sentence embeddings reversely, from right to left, and results in another sequence of hidden states, ( ⃗ h1, ⃗ h2, . . . , ⃗ hn): ⃗hi = GRU(xi,⃗hi−1) ⃗ hi = GRU(xi, ⃗ hi+1) (5) (6) The initial states of the BiGRU are set to zero vectors, i.e., ⃗h1 = 0 and ⃗ hn = 0. After reading the sentence, the forward and backward hidden states are concatenated, i.e., hi = [⃗hi; ⃗ hi], to get the basic sentence representation. 4.2 Selective Mechanism In the sequence-to-sequence machine translation (MT) model, the encoder and decoder are responsible for mapping input sentence information to a list of vectors and decoding the sentence representation vectors to generate an output sentence (Bahdanau et al., 2015). Some previous works apply this framework to summarization generation tasks (Nallapati et al., 2016; Gu et al., 2016; Gulcehre et al., 2016). However, abstractive sentence summarization is different from MT in two ways. First, there is no explicit alignment relationship between the input sentence and the output summary except for the common words. Second, summarization task needs to keep the highlights and remove the unnecessary information, while MT needs to keep all information literally. Herein, we propose a selective mechanism to model the selection process for abstractive sentence summarization. The selective mechanism 1097 𝑥1 ℎ1 𝑥2 ℎ2 𝑥3 ℎ3 𝑥4 ℎ4 𝑥5 ℎ5 𝑥6 ℎ6 ℎ1′ ℎ2′ ℎ3′ ℎ4′ ℎ5′ ℎ6′ ℎ𝑖 𝑠 MLP GRU Attention 𝑐𝑡−1 𝑠𝑡−1 𝑦𝑡−1 𝑠𝑡 𝑐𝑡 maxout 𝑦𝑡 Selective Gate Network Encoder Decoder softmax Figure 2: Overview of the Selective Encoding for Abstractive Sentence Summarization (SEASS). extends the sequence-to-sequence model by constructing a tailored representation for abstractive sentence summarization task. Concretely, the selective gate network in our model takes two vector inputs, the sentence word vector hi and the sentence representation vector s. The sentence word vector hi is the output of the BiGRU encoder and represents the meaning and context information of word xi. The sentence vector s is used to represent the meaning of the sentence. For each word xi, the selective gate network generates a gate vector sGatei using hi and s, then the tailored representation is constructed, i.e., h′ i. 
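A compact PyTorch rendering of the BiGRU encoder of Equations (1)-(6) might look as follows; the dimensions and names are placeholders and this is only a sketch, not the authors' implementation. nn.GRU with bidirectional=True already returns the forward and backward hidden states concatenated along the feature axis, matching hi = [forward hi ; backward hi]. The gate construction itself is detailed next.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Bidirectional GRU encoder producing h_i = [forward_h_i ; backward_h_i]."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bigru = nn.GRU(emb_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) integer tensor
        emb = self.embedding(word_ids)                    # (batch, seq_len, emb_dim)
        states, last = self.bigru(emb)                    # states: (batch, seq_len, 2*hidden)
        forward_last, backward_first = last[0], last[1]   # final forward state, final backward state
        return states, forward_last, backward_first

encoder = SentenceEncoder(vocab_size=1000)
h, h_fwd_n, h_bwd_1 = encoder(torch.randint(0, 1000, (2, 6)))
print(h.shape)    # torch.Size([2, 6, 1024])
```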
In detail, we concatenate the last forward hidden state ⃗hn and backward hidden state ⃗ h1 as the sentence representation s: s = " ⃗ h1 ⃗hn # (7) For each time step i, the selective gate takes the sentence representation s and BiGRU hidden hi as inputs to compute the gate vector sGatei: sGatei = σ(Wshi + Uss + b) h′ i = hi ⊙sGatei (8) (9) where Ws and Us are weight matrices, b is the bias vector, σ denotes sigmoid activation function, and ⊙is element-wise multiplication. After the selective gate network, we obtain another sequence of vectors (h′ 1, h′ 2, . . . , h′ n). This new sequence is then used as the input sentence representation for the decoder to generate the summary. 4.3 Summary Decoder On top of the sentence encoder and the selective gate network, we use GRU with attention as the decoder to produce the output summary. At each decoding time step t, the GRU reads the previous word embedding wt−1 and previous context vector ct−1 as inputs to compute the new hidden state st. To initialize the GRU hidden state, we use a linear layer with the last backward encoder hidden state ⃗ h1 as input: st = GRU(wt−1, ct−1, st−1) s0 = tanh(Wd ⃗ h1 + b) (10) (11) where Wd is the weight matrix and b is the bias vector. The context vector ct for current time step t is computed through the concatenate attention mechanism (Luong et al., 2015), which matches the current decoder state st with each encoder hidden state h′ i to get an importance score. The importance scores are then normalized to get the current context vector by weighted sum: et,i = v⊤ a tanh(Wast−1 + Uah′ i) αt,i = exp(et,i) Pn i=1 exp(et,i) ct = n X i=1 αt,ih′ i (12) (13) (14) We then combine the previous word embedding wt−1, the current context vector ct, and the decoder state st to construct the readout state rt. The readout state is then passed through a maxout hidden layer (Goodfellow et al., 2013) to predict the 1098 next word with a softmax layer over the decoder vocabulary. rt = Wrwt−1 + Urct + Vrst mt = [max{rt,2j−1, rt,2j}]⊤ j=1,...,d p(yt|y1, . . . , yt−1) = softmax(Womt) (15) (16) (17) where Wa, Ua, Wr, Ur, Vr and Wo are weight matrices. Readout state rt is a 2d-dimensional vector, and the maxout layer (Equation 16) picks the max value for every two numbers in rt and produces a d-dimensional vector mt. 4.4 Objective Function Our goal is to maximize the output summary probability given the input sentence. Therefore, we optimize the negative log-likelihood loss function: J(θ) = −1 |D| X (x,y)∈D log p(y|x) (18) where D denotes a set of parallel sentencesummary pairs and θ is the model parameter. We use Stochastic Gradient Descent (SGD) with minibatch to learn the model parameter θ. 5 Experiments In this section we introduce the dataset we use, the evaluation metric, the implementation details, the baselines we compare to, and the performance of our system. 5.1 Dataset Training Set For our training set, we use a parallel corpus which is constructed from the Annotated English Gigaword dataset (Napoles et al., 2012) as mentioned in Rush et al. (2015). The parallel corpus is produced by pairing the first sentence and the headline in the news article with some heuristic rules. We use the script1 released by Rush et al. (2015) to pre-process and extract the training and development datasets. The script performs various basic text normalization, including PTB tokenization, lower-casing, replacing all digit characters with #, and replacing word types seen less than 5 times with ⟨unk⟩. 
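Before the corpus statistics, here is a sketch of the selective gate of Equations (7)-(9), continuing the PyTorch sketch above; the dimensions are placeholders, and the attention decoder and maxout output layer are omitted for brevity.

```python
import torch
import torch.nn as nn

class SelectiveGate(nn.Module):
    """sGate_i = sigmoid(W_s h_i + U_s s + b);  h'_i = h_i * sGate_i."""
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.W_s = nn.Linear(2 * hidden_dim, 2 * hidden_dim, bias=False)
        self.U_s = nn.Linear(2 * hidden_dim, 2 * hidden_dim, bias=True)

    def forward(self, h, forward_last, backward_first):
        # h: (batch, seq_len, 2*hidden);  s = [backward_h_1 ; forward_h_n]
        s = torch.cat([backward_first, forward_last], dim=-1)            # (batch, 2*hidden)
        gate = torch.sigmoid(self.W_s(h) + self.U_s(s).unsqueeze(1))     # (batch, seq_len, 2*hidden)
        return h * gate                                                  # tailored representation h'

gate_net = SelectiveGate()
h_prime = gate_net(torch.randn(2, 6, 1024), torch.randn(2, 512), torch.randn(2, 512))
print(h_prime.shape)    # torch.Size([2, 6, 1024])
```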
The extracted corpus contains about 3.8M sentence-summary pairs for the training set and 189K examples for the development set. For our test set, we use the English Gigaword, DUC 2004, and Microsoft Research Abstractive Text Compression test sets. 1https://github.com/facebook/NAMAS English Gigaword Test Set We randomly sample 8000 pairs from the extracted development set as our development set since it is relatively large. For the test set, we use the same randomly heldout test set of 2000 sentence-summary pairs as Rush et al. (2015).2 We also find that except for the empty titles, this test set has some invalid lines like the input sentence containing only one word. Therefore, we further sample 2000 pairs as our internal test set and release it for future works3. DUC 2004 Test Set We employ DUC 2004 data for tasks 1 & 2 (Over et al., 2007) in our experiments as one of the test sets since it is too small to train a neural network model on. The dataset pairs each document with 4 different human-written reference summaries which are capped at 75 bytes. It has 500 input sentences with each sentence paired with 4 summaries. MSR-ATC Test Set Toutanova et al. (2016) release a new dataset for sentence summarization task by crowdsourcing. This dataset contains approximately 6,000 source text sentences with multiple manually-created summaries (about 26,000 sentence-summary pairs in total). Toutanova et al. (2016) provide a standard split of the data into training, development, and test sets, with 4,936, 448 and 785 input sentences respectively. Since the training set is too small, we only use the test set as one of our test sets. We denote this dataset as MSR-ATC (Microsoft Research Abstractive Text Compression) test set in the following. Table 2 summarizes the statistic information of the three datasets we used. 5.2 Evaluation Metric We employ ROUGE (Lin, 2004) as our evaluation metric. ROUGE measures the quality of summary by computing overlapping lexical units, such as unigram, bigram, trigram, and longest common subsequence (LCS). It becomes the standard evaluation metric for DUC shared tasks and popular for summarization evaluation. Following previous work, we use ROUGE-1 (unigram), ROUGE-2 (bi2Thanks to Rush et al. (2015), we acquired the test set they used. Following Chopra et al. (2016), we remove pairs with empty titles resulting in slightly different accuracy compared to Rush et al. (2015) for their systems. The cleaned test set contains 1951 sentence-summary pairs. 3Our development and test sets can be found at https: //res.qyzhou.me 1099 Data Set Giga DUC† MSR† #(sent) 3.99M 500 785 #(sentWord) 125M 17.8K 29K #(summWord) 33M 20.9K 85.9K #(ref) 1 4 3-5 AvgInputLen 31.35 35.56 36.97 AvgSummLen 8.23 10.43 25.5 Table 2: Data statistics for the English Gigaword, DUC 2004 and MSR-ATC datasets. #(x) denotes the number of x, e.g., #(ref) is the number of reference summaries of an input sentence. AvgInputLen is the average input sentence length and AvgSummLen is the average summary length. †DUC 2004 and MSR-ATC datasets are for test purpose only. gram) and ROUGE-L (LCS) as the evaluation metrics in the reported experimental results. 5.3 Implementation Details Model Parameters The input and output vocabularies are collected from the training data, which have 119,504 and 68,883 word types respectively. We set the word embedding size to 300 and all GRU hidden state sizes to 512. We use dropout (Srivastava et al., 2014) with probability p = 0.5. 
Model Training We initialize model parameters randomly using a Gaussian distribution with Xavier scheme (Glorot and Bengio, 2010). We use Adam (Kingma and Ba, 2015) as our optimizing algorithm. For the hyperparameters of Adam optimizer, we set the learning rate α = 0.001, two momentum parameters β1 = 0.9 and β2 = 0.999 respectively, and ϵ = 10−8. During training, we test the model performance (ROUGE-2 F1) on development set for every 2,000 batches. We halve the Adam learning rate α if the ROUGE-2 F1 score drops for twelve consecutive tests on development set. We also apply gradient clipping (Pascanu et al., 2013) with range [−5, 5] during training. To both speed up the training and converge quickly, we use mini-batch size 64 by grid search. Beam Search We use beam search to generate multiple summary candidates to get better results. To avoid favoring shorter outputs, we average the ranking score along the beam path by dividing it by the number of generated words. To both decode fast and get better results, we set the beam size to 12 in our experiments. 5.4 Baseline We compare SEASS model with the following state-of-the-art baselines: ABS Rush et al. (2015) use an attentive CNN encoder and NNLM decoder to do the sentence summarization task. We trained this baseline model with the released code1 and evaluate it with our internal English Gigaword test set and MSR-ATC test set. ABS+ Based on ABS model, Rush et al. (2015) further tune their model using DUC 2003 dataset, which leads to improvements on DUC 2004 test set. CAs2s As an extension of the ABS model, Chopra et al. (2016) use a convolutional attention-based encoder and RNN decoder, which outperforms the ABS model. Feats2s Nallapati et al. (2016) use a full RNN sequence-to-sequence encoder-decoder model and add some features to enhance the encoder, such as POS tag, NER, and so on. Luong-NMT Neural machine translation model of Luong et al. (2015) with two-layer LSTMs for the encoder-decoder with 500 hidden units in each layer implemented in (Chopra et al., 2016). s2s+att We also implement a sequence-tosequence model with attention as our baseline and denote it as “s2s+att”. 5.5 Results We report ROUGE F1, ROUGE recall and ROUGE F1 for English Gigaword, DUC 2004 and MSRATC test sets respectively. We use the official ROUGE script (version 1.5.5) 4 to evaluate the summarization quality in our experiments. For English Gigaword5 and MSR-ATC6 test sets, the outputs have different lengths so we evaluate the system with F1 metric. As for the DUC 2004 test set7, the task requires the system to produce a fixed length summary (75 bytes), therefore we employ ROUGE recall as the evaluation metric. To satisfy the length requirement, we decode the output summary to a roughly expected length following Rush et al. (2015). 4http://www.berouge.com/ 5The ROUGE evaluation option is the same as Rush et al. (2015), -m -n 2 -w 1.2 6The ROUGE evaluation option is, -m -n 2 -w 1.2 7The ROUGE evaluation option is, -m -b 75 -n 2 -w 1.2 1100 English Gigaword We acquire the test set from Rush et al. (2015) so we can make fair comparisons to the baselines. 
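As an aside before the result tables, the length normalization used during beam search above, averaging the accumulated ranking score over the number of generated words so that shorter outputs are not favored, is illustrated below with hypothetical log-probabilities.

```python
import math

def normalized_beam_score(log_probs):
    """Average the accumulated log-probability over the generated words."""
    return sum(log_probs) / len(log_probs)

short_hyp = [math.log(0.35)] * 2    # 2 generated words
long_hyp = [math.log(0.45)] * 3     # 3 generated words

# Unnormalized, the longer hypothesis scores lower simply because it adds more log terms;
# after per-word averaging it is preferred.
print(sum(short_hyp), sum(long_hyp))
print(normalized_beam_score(short_hyp), normalized_beam_score(long_hyp))
```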
Models             RG-1   RG-2   RG-L
ABS (beam)‡        29.55  11.32  26.42
ABS+ (beam)‡       29.76  11.88  26.96
Feats2s (beam)‡    32.67  15.59  30.64
CAs2s (greedy)‡    33.10  14.45  30.25
CAs2s (beam)‡      33.78  15.97  31.15
Luong-NMT (beam)‡  33.10  14.45  30.71
s2s+att (greedy)   33.18  14.79  30.80
s2s+att (beam)     34.04  15.95  31.68
SEASS (greedy)     35.48  16.50  32.93
SEASS (beam)       36.15  17.54  33.63

Table 3: Full length ROUGE F1 evaluation results on the English Gigaword test set used by Rush et al. (2015). RG in the table denotes ROUGE. Results with the ‡ mark are taken from the corresponding papers. The superscript − indicates that our SEASS model with beam search performs significantly better than it as given by the 95% confidence interval in the official ROUGE script.

Models             RG-1   RG-2   RG-L
ABS (beam)         37.41  15.87  34.70
s2s+att (greedy)   42.41  20.76  39.84
s2s+att (beam)     43.76  22.28  41.14
SEASS (greedy)     45.27  22.88  42.20
SEASS (beam)       46.86  24.58  43.53

Table 4: Full length ROUGE F1 evaluation on our internal English Gigaword test data. The superscript − indicates that our SEASS model performs significantly better than it as given by the 95% confidence interval in the official ROUGE script.

In Table 3, we report the ROUGE F1 scores of our model and the baseline methods. Our SEASS model with beam search outperforms all baseline models by a large margin. Even with greedy search, our model still performs better than other methods that use beam search. For the popular ROUGE-2 metric, our SEASS model achieves a 17.54 F1 score and performs better than the previous works. Compared to the ABS model, our model has a 6.22-point ROUGE-2 F1 gain. Compared to the highest CAs2s baseline, our model achieves a 1.57 ROUGE-2 F1 improvement and passes the significance test according to the official ROUGE script.

Table 4 summarizes our results on our internal test set using the ROUGE F1 evaluation metrics. The performance on our internal test set is comparable to that on our development set: the model achieves 24.58 ROUGE-2 F1 and outperforms the baselines.

DUC 2004. We evaluate our model using ROUGE recall scores since the reference summaries of the DUC 2004 test set are capped at 75 bytes. Therefore, we decode the summary to a fixed length of 18 to ensure that the generated summary satisfies the minimum length requirement. As summarized in Table 5, our SEASS model outperforms all the baseline methods and achieves 29.21, 9.56 and 25.51 for ROUGE-1, -2 and -L recall. Compared to the ABS+ model, which is tuned using DUC 2003 data, our model performs significantly better by 1.07 ROUGE-2 recall points, even though it is trained only with English Gigaword sentence-summary data and is not tuned on DUC data.

Models             RG-1   RG-2   RG-L
ABS (beam)‡        26.55   7.06  22.05
ABS+ (beam)‡       28.18   8.49  23.81
Feats2s (beam)‡    28.35   9.46  24.59
CAs2s (greedy)‡    29.13   7.62  23.92
CAs2s (beam)‡      28.97   8.26  24.06
Luong-NMT (beam)‡  28.55   8.79  24.43
s2s+att (greedy)   27.03   7.89  23.80
s2s+att (beam)     28.13   9.25  24.76
SEASS (greedy)     28.68   8.55  25.04
SEASS (beam)       29.21   9.56  25.51

Table 5: ROUGE recall evaluation results on the DUC 2004 test set. All these models are tested using beam search. Results with the ‡ mark are taken from the corresponding papers. The superscript − indicates that our SEASS model performs significantly better than it as given by the 95% confidence interval in the official ROUGE script.

MSR-ATC. We report the full length ROUGE F1 scores on the MSR-ATC test set in Table 6. To the best of our knowledge, this is the first work that reports ROUGE metric scores on the MSR-ATC dataset. Note that we only compare our model with ABS since the other baselines are not publicly available.
Our SEASS achieves 10.63 ROUGE-2 F1 and outperforms the s2s+att baseline by 1.02 points.

the council of europe 's human rights commissioner slammed thursday as ``unacceptable'' conditions in france 's overcrowded and dilapidated jails , where some ## inmates have committed suicide this year .

Figure 3: First derivative heat map of the output with respect to the selective gate. The important words are selected in the input sentence, such as "europe", "slammed" and "unacceptable". The output summary of our system is "council of europe slams french prison conditions" and the true summary is "council of europe again slams french prison conditions".

Models             RG-1   RG-2   RG-L
ABS (beam)         20.27   5.26  17.10
s2s+att (greedy)   15.15   4.48  13.62
s2s+att (beam)     22.65   9.61  21.39
SEASS (greedy)     19.77   6.44  17.36
SEASS (beam)       25.75  10.63  22.90

Table 6: Full length ROUGE F1 evaluation on the MSR-ATC test set. Beam search is used in both the baselines and our method. The superscript − indicates that our SEASS model performs significantly better than it as given by the 95% confidence interval in the official ROUGE script.

6 Discussion

In this section, we first compare the performance of SEASS with the s2s+att baseline model to illustrate that the proposed method succeeds in selecting information and building a tailored representation for abstractive sentence summarization. We then analyze selective encoding by visualizing the heat map.

Effectiveness of Selective Encoding. We further test the SEASS model with different sentence lengths on the English Gigaword test sets, which are merged from the Rush et al. (2015) test set and our internal test set. The length of sentences in the test sets ranges from 10 to 80. We group the sentences with an interval of 4, obtaining 18 different groups, and we draw the first 14 groups. We find that the performance curve of our SEASS model always appears to be on top of that of s2s+att with a certain margin. For the groups of 16, 20, 24, 32, 56 and 60, the SEASS model obtains big improvements compared to the s2s+att model. Overall, these improvements on all groups indicate that the selective encoding method benefits the abstractive sentence summarization task.

Figure 4: ROUGE-2 F1 score on different groups of input sentences in terms of their length for the s2s+att baseline and our SEASS model on the English Gigaword test sets.

Saliency Heat Map of Selective Gate. Since the output of the selective gate network is a high-dimensional vector, it is hard to visualize all the gate values. We use the method of Li et al. (2016) to visualize the contribution of the selective gate to the final output, which can be approximated by the first derivative. Given sentence words x with an associated output summary y, the trained model associates the pair (x, y) with a score S_y(x). The goal is to decide which gate g associated with a specific word makes the most significant contribution to S_y(x). Since the score S_y(x) is a highly non-linear function in deep neural network models, we approximate S_y(g) by computing the first-order Taylor expansion:

$S_y(g) \approx w(g)^\top g + b$   (19)

where w(g) is the first derivative of S_y with respect to the gate g:

$w(g) = \left. \frac{\partial S_y}{\partial g} \right|_{g}$   (20)

We then draw the Euclidean norm of the first derivative of the output y with respect to the selective gate g associated with each input word.
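The first-order saliency in Eqs. (19)-(20) amounts to taking the gradient of the output score with respect to each word's gate vector and reporting its norm. The sketch below shows this computation with PyTorch autograd on a stand-in scoring function; the real S_y is the summarizer's output score and the gates come from the selective gate network, neither of which is reproduced here, and all names are ours.

```python
import torch

def gate_saliency(score_fn, gates):
    """Per-word saliency: the Euclidean norm of d(score)/d(gate_i), i.e. the
    weight w(g) of the first-order Taylor approximation S_y(g) ~ w(g)^T g + b.

    gates: tensor of shape (seq_len, gate_dim) with requires_grad=True.
    score_fn: maps the gate tensor to a scalar score S_y.
    """
    score = score_fn(gates)                      # scalar
    (grad,) = torch.autograd.grad(score, gates)  # (seq_len, gate_dim)
    return grad.norm(dim=1)                      # (seq_len,)

# Toy stand-in for the decoder score: a squashed projection of the gates.
torch.manual_seed(0)
seq_len, gate_dim = 6, 8
projection = torch.randn(gate_dim)
gates = torch.rand(seq_len, gate_dim, requires_grad=True)
saliency = gate_saliency(lambda g: torch.tanh(g @ projection).sum(), gates)
print(saliency)  # one saliency value per input word
```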
Figure 3 shows an example of the first derivative heat map, in which most of the important words are selected by the selective gate such as “europe”, “slammed”, “unacceptable”, “conditions”, and “france”. We can observe that the selective gate determines the importance of each word before decoder, which releases the burden of it by providing tailored sentence encoding. 7 Conclusion This paper proposes a selective encoding model which extends the sequence-to-sequence model for abstractive sentence summarization task. The selective mechanism mimics one of the human summarizers’ behaviors, selecting important information before writing down the summary. With the proposed selective mechanism, we build an end-to-end neural network summarization model which consists of three phases: encoding, selection, and decoding. Experimental results show that the selective encoding model greatly improves the performance with respect to the state-of-theart methods on English Gigaword, DUC 2004 and MSR-ATC test sets. Acknowledgments We thank Chuanqi Tan, Junwei Bao, Shuangzhi Wu and the anonymous reviewers for their helpful comments. We also thank Alexander M. Rush for providing the dataset for comparison and helpful discussions. References Ayana, Shiqi Shen, Zhiyuan Liu, and Maosong Sun. 2016. Neural headline generation with minimum risk training. CoRR abs/1604.01904. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of 3rd International Conference for Learning Representations. San Diego. Michele Banko, Vibhu O Mittal, and Michael J Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 318–325. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. journal of machine learning research 3(Feb):1137–1155. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 484–494. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1724–1734. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 93–98. Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLT-NAACL 03 on Text summarization workshop-Volume 5. Association for Computational Linguistics, pages 1–8. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Aistats. volume 9, pages 249–256. Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. 
2013. Maxout networks. ICML (3) 28:1319–1327. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1631–1640. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 140–149. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference for Learning Representations. San Diego. 1103 Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence 139(1):91–107. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 681–691. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop. Barcelona, Spain, volume 8. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412– 1421. Ramesh Nallapati, Bowen Zhou, C¸ a glar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. Association for Computational Linguistics, Stroudsburg, PA, USA, AKBC-WEKEX ’12, pages 95–100. Paul Over, Hoa Dang, and Donna Harman. 2007. Duc in context. Information Processing & Management 43(6):1506–1520. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. ICML (3) 28:1310–1318. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 379–389. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1683–1692. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in neural information processing systems. pages 3104–3112. Kristina Toutanova, Chris Brockett, Ke M. Tran, and Saleema Amershi. 2016. A dataset and evaluation metrics for abstractive compression of sentences and short paragraphs. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 340–350. Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online segment to segment neural transduction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1307–1316. David Zajic, Bonnie J Dorr, Jimmy Lin, and Richard Schwartz. 2007. Multi-candidate reduction: Sentence compression as a tool for document summarization tasks. Information Processing & Management 43(6):1549–1570. Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel Urtasun. 2016. Efficient summarization with read-again and copy mechanism. arXiv preprint arXiv:1611.03382 . 1104
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1105–1115, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1102

PositionRank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents

Corina Florescu and Cornelia Caragea
Computer Science and Engineering
University of North Texas, USA
[email protected], [email protected]

Abstract

The large and growing amounts of online scholarly data present both challenges and opportunities to enhance knowledge discovery. One such challenge is to automatically extract a small set of keyphrases from a document that can accurately describe the document's content and can facilitate fast information processing. In this paper, we propose PositionRank, an unsupervised model for keyphrase extraction from scholarly documents that incorporates information from all positions of a word's occurrences into a biased PageRank. Our model obtains remarkable improvements in performance over PageRank models that do not take into account word positions as well as over strong baselines for this task. Specifically, on several datasets of research papers, PositionRank achieves improvements as high as 29.09%.

1 Introduction

The current Scholarly Web contains many millions of scientific documents. For example, Google Scholar is estimated to have more than 100 million documents. On one hand, these rapidly-growing scholarly document collections offer benefits for knowledge discovery, and on the other hand, finding useful information has become very challenging. Keyphrases associated with a document typically provide a high-level topic description of the document and can allow for efficient information processing. In addition, keyphrases are shown to be rich sources of information in many natural language processing and information retrieval tasks such as scientific paper summarization, classification, recommendation, clustering, and search (Abu-Jbara and Radev, 2011; Qazvinian et al., 2010; Jones and Staveley, 1999; Zha, 2002; Zhang et al., 2004; Hammouda et al., 2005).

Due to their importance, many approaches to keyphrase extraction have been proposed in the literature along two lines of research: supervised and unsupervised (Hasan and Ng, 2014, 2010). In the supervised line of research, keyphrase extraction is formulated as a binary classification problem, where candidate phrases are classified as either positive (i.e., keyphrases) or negative (i.e., non-keyphrases) (Frank et al., 1999; Hulth, 2003). Various feature sets and classification algorithms yield different extraction systems. For example, Frank et al. (1999) developed a system that extracts two features for each candidate phrase, i.e., the tf-idf of the phrase and its distance from the beginning of the target document, and uses them as input to Naïve Bayes classifiers. Although supervised approaches typically perform better than unsupervised approaches (Kim et al., 2013), the requirement for large human-annotated corpora for each field of study has led to significant attention towards the design of unsupervised approaches.
In the unsupervised line of research, keyphrase extraction is formulated as a ranking problem, with graph-based ranking techniques being considered state-of-the-art (Hasan and Ng, 2014). These graph-based techniques construct a word graph from each target document, such that nodes correspond to words and edges correspond to word association patterns. Nodes are then ranked using graph centrality measures such as PageRank (Mihalcea and Tarau, 2004; Liu et al., 2010) or HITS (Litvak and Last, 2008), and the top ranked phrases are returned as keyphrases. Since their introduction, many graph-based extensions have been proposed, which aim at modeling various types of information. For example, Wan and Xiao (2008) proposed a model that incorporates a local neighborhood of the target document corresponding to its textually-similar documents, computed using the cosine similarity between the tf-idf vectors of documents. Liu et al. (2010) assumed a mixture of topics over documents and proposed to use topic models to decompose these topics in order to select keyphrases from all major topics. Keyphrases are then ranked by aggregating the topic-specific scores obtained from several topic-biased PageRanks.

Factorizing Personalized Markov Chains for Next-Basket Recommendation
by Steffen Rendle, Christoph Freudenthaler and Lars Schmidt-Thieme
Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. [...] we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. [...] our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. [...] we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. [...]
Author-input keyphrases: Basket Recommendation, Markov Chain, Matrix Factorization

Figure 1: The title and abstract of a WWW paper by Rendle et al. (2010) and the author-input keyphrases for the paper. Red bold phrases represent the gold-standard keyphrases for the document.

We posit that other information can be leveraged that has the potential to improve unsupervised keyphrase extraction. For example, in a scholarly domain, keyphrases generally occur on positions very close to the beginning of a document and occur frequently. Figure 1 shows an anecdotal example illustrating this behavior using the 2010 best paper award winner in the World Wide Web conference. The author-input keyphrases are marked with red bold in the figure. Notice in this example the high frequency of the keyphrase "Markov chain" that occurs very early in the document (even from its title). Hence, can we design an effective unsupervised approach to keyphrase extraction by jointly exploiting words' position information and their frequency in documents?

We specifically address this question using research papers as a case study. The result of this extraction task will aid indexing of documents in digital libraries, and hence, will lead to improved organization, search, retrieval, and recommendation of scientific documents. The importance of keyphrase extraction from research papers is also emphasized by the SemEval Shared Tasks on this topic from 2017 (http://alt.qcri.org/semeval2017/task10/) and 2010 (Kim et al., 2010).
Our contributions are as follows:

• We propose an unsupervised graph-based model, called PositionRank, that incorporates information from all positions of a word's occurrences into a biased PageRank to score keywords that are later used to score and rank keyphrases in research papers.

• We show that PositionRank, which aggregates information from all positions of a word's occurrences, performs better than a model that uses only the first position of a word.

• We experimentally evaluate PositionRank on three datasets of research papers and show statistically significant improvements over PageRank-based models that do not take into account word positions, as well as over strong baselines for keyphrase extraction.

The rest of the paper is organized as follows. We summarize related work in the next section. PositionRank is described in Section 3. We then present the datasets of research papers, and our experiments and results in Section 4. Finally, we conclude the paper in Section 5.

2 Related Work

Many supervised and unsupervised approaches to keyphrase extraction have been proposed in the literature (Hasan and Ng, 2014).

Supervised approaches use documents annotated with "correct" keyphrases to train classifiers for discriminating keyphrases from non-keyphrases for a document. KEA (Frank et al., 1999) and GenEx (Turney, 2000) are two representative supervised approaches, with the most important features being the frequency and the position of a phrase in a target document. Hulth (2003) used a combination of lexical and syntactic features, such as the collection frequency and the part-of-speech tag of a phrase, in conjunction with a bagging technique. Nguyen and Kan (2007) extended KEA to include features such as the distribution of candidate phrases in different sections of a research paper, and the acronym status of a phrase. In a different work, Medelyan et al. (2009) extended KEA to integrate information from Wikipedia. Lopez and Romary (2010) used bagged decision trees learned from a combination of features including structural features (e.g., the presence of a phrase in particular sections of a document) and lexical features (e.g., the presence of a candidate phrase in WordNet or Wikipedia). Chuang et al. (2012) proposed a model that incorporates a set of statistical and linguistic features (e.g., tf-idf, BM25, part-of-speech filters) for identifying descriptive terms in a text. Caragea et al. (2014a) designed features based on information available in a document network (such as a citation network) and used them with traditional features in a supervised framework.

In unsupervised approaches, various measures such as tf-idf and topic proportions are used to score words, which are later aggregated to obtain scores for phrases (Barker and Cornacchia, 2000; Zhang et al., 2007; Liu et al., 2009). The ranking based on tf-idf has been shown to work well in practice (Hasan and Ng, 2014, 2010), despite its simplicity. Graph-based ranking methods and centrality measures are considered state-of-the-art for unsupervised keyphrase extraction. Mihalcea and Tarau (2004) proposed TextRank for scoring keyphrases by applying PageRank on a word graph built from adjacent words within a document. Wan and Xiao (2008) extended TextRank to SingleRank by adding weighted edges between words that co-occur in a window of variable size w ≥ 2.
Textually-similar neighboring documents are included in ExpandRank (Wan and Xiao, 2008) to compute more accurate word co-occurrence information. Gollapalli and Caragea (2014) extended ExpandRank to integrate information from citation networks where papers cite one another. Lahiri et al. (2014) extracted keyphrases from documents using various centrality measures such as node degree, clustering coefficient and closeness. Martinez-Romo et al. (2016) used information from WordNet to enrich the semantic relationships between the words in the graph. Several unsupervised approaches leverage word clustering techniques, such as first grouping candidate words into topics and then extracting one representative keyphrase from each topic (Liu et al., 2009; Bougouin et al., 2013). Liu et al. (2010) extended topic-biased PageRank (Haveliwala, 2003) to keyphrase extraction. In particular, they decomposed a document into multiple topics, using topic models, and applied a separate topic-biased PageRank for each topic. The PageRank scores from each topic were then combined into a single score, using as weights the topic proportions returned by topic models for the document. The best performing keyphrase extraction system in SemEval 2010 (El-Beltagy and Rafea, 2010) used statistical observations such as term frequencies to filter out phrases that are unlikely to be keyphrases. More precisely, thresholding on the frequency of phrases is applied, where the thresholds are estimated from the data. The candidate phrases are then ranked using the tf-idf model in conjunction with a boosting factor which aims at reducing the bias towards single-word terms. Danesh et al. (2015) computed an initial weight for each phrase based on a combination of statistical heuristics such as the tf-idf score and the first position of a phrase in a document. Phrases and their initial weights are then incorporated into a graph-based algorithm which produces the final ranking of keyphrase candidates. Le et al. (2016) showed that the extraction of keyphrases from a document can benefit from considering candidate phrases with part-of-speech tags other than nouns or adjectives. Adar and Datta (2015) extracted keyphrases by mining abbreviations from scientific literature and built a semantically hierarchical keyphrase database. Word embedding vectors were also employed to measure the relatedness between words in graph-based models (Wang et al., 2014). Many of the above approaches, both supervised and unsupervised, are compared and analyzed in the ACL survey on keyphrase extraction by Hasan and Ng (2014).

In contrast to the above approaches, we propose PositionRank, aimed at capturing both highly frequent words or phrases and their position in a document. Although the relative position of a word in a document is shown to be a very effective feature in supervised keyphrase extraction (Hulth, 2003; Zhang et al., 2007), to our knowledge, the position information has not been used before in unsupervised methods. The strong contribution of
3 Proposed Model In this section, we describe PositionRank, our fully unsupervised, graph-based model, that simultaneously incorporates the position of words and their frequency in a document to compute a biased PageRank score for each candidate word. Graph-based ranking algorithms such as PageRank (Page et al., 1998) measure the importance of a vertex within a graph by taking into account global information computed recursively from the entire graph. For each word, we compute a weight by aggregating information from all positions of the word’s occurrences. This weight is then incorporated into a biased PageRank algorithm in order to assign a different “preference” to each word. 3.1 PositionRank The PositionRank algorithm involves three essential steps: (1) the graph construction at word level; (2) the design of Position-Biased PageRank; and (3) the formation of candidate phrases. These steps are detailed below. 3.1.1 Graph Construction Let d be a target document for extracting keyphrases. We first apply the part-of-speech filter using the NLP Stanford toolkit and then select as candidate words only nouns and adjectives, similar to previous works (Mihalcea and Tarau, 2004; Wan and Xiao, 2008). We build a word graph G = (V, E) for d such that each unique word that passes the part-of-speech filter corresponds to a node in G. Two nodes vi and vj are connected by an edge (vi, vj) ∈E if the words corresponding to these nodes co-occur within a window of w contiguous tokens in the content of d. The weight of an edge (vi, vj) ∈E is computed based on the co-occurrence count of the two words within a window of w successive tokens in d. Note that the graph can be constructed both directed and undirected. However, Mihalcea and Tarau (2004) showed that the type of graph used to represent the text does not significantly influence the performance of keyphrase extraction. Hence, in this work, we build undirected graphs. 3.1.2 Position-Biased PageRank Formally, let G be an undirected graph constructed as above and let M be its adjacency matrix. An element mij ∈M is set to the weight of edge (vi, vj) if there exist an edge between nodes vi and vj, and is set to 0 otherwise. The PageRank score of a node vi is recursively computed by summing the normalized scores of nodes vj, which are linked to vi (as explained below). Let S denote the vector of PageRank scores, for all vi ∈V . The initial values of S are set to 1 |V |. The PageRank score of each node at step t+1, can then be computed recursively using: S(t + 1) = f M · S(t) (1) where f M is the normalized form of matrix M with g mij ∈f M defined as: g mij = ( mij/ P|V | j=1 mij if P|V | j=1 mij ̸= 0 0 otherwise The PageRank computation can be seen as a Markov Chain process in which nodes represent states and the links between them are the transitions. By recursively applying Eq. (1), we obtain the principal eigenvector, which represents the stationary probability distribution of each state, in our case of each node (Manning et al., 2008). To ensure that the PageRank (or the random walk) does not get stuck into cycles of the graph, a damping factor α is added to allow the “teleport” operation to another node in the graph. Hence, the computation of S becomes: S = α · f M · S + (1 −α) · ep (2) where S is the principal eigenvector and ep is a vector of length |V | with all elements 1 |V |. The vector ep indicates that, being in a node vi, the random walk can jump to any other node in the graph with equal probability. 
By biasing $\tilde{p}$, the random walk would prefer nodes that have a higher probability in the graph (Haveliwala, 2003). The idea of PositionRank is to assign larger weights (or probabilities) to words that are found early in a document and are frequent. Specifically, we want to assign a higher probability to a word found in the 2nd position as compared to a word found in the 50th position in the same document. We weigh each candidate word with its inverse position in the document before any filters are applied. If the same word appears multiple times in the target document, then we sum all its position weights. For example, if a word is found in the 2nd, 5th and 10th positions, its weight is:

$\frac{1}{2} + \frac{1}{5} + \frac{1}{10} = \frac{4}{5} = 0.8$

Summing up the position weights for a given word aims to grant more confidence to frequently occurring words by taking into account the position weight of each occurrence. Then, the vector $\tilde{p}$ is set to the normalized weights of the candidate words as follows:

$\tilde{p} = \left[ \frac{p_1}{p_1 + p_2 + \ldots + p_{|V|}}, \frac{p_2}{p_1 + p_2 + \ldots + p_{|V|}}, \ldots, \frac{p_{|V|}}{p_1 + p_2 + \ldots + p_{|V|}} \right]$

The PageRank score of a vertex $v_i$, i.e., $S(v_i)$, can be obtained in an algebraic way by recursively computing the following equation:

$S(v_i) = (1 - \alpha) \cdot \tilde{p}_i + \alpha \cdot \sum_{v_j \in Adj(v_i)} \frac{w_{ji}}{O(v_j)} S(v_j)$

where $O(v_j) = \sum_{v_k \in Adj(v_j)} w_{jk}$ and $\tilde{p}_i$ is the weight found in the vector $\tilde{p}$ for vertex $v_i$. In our experiments, the words' PageRank scores are recursively computed until the difference between two consecutive iterations is less than 0.001 or a number of 100 iterations is reached.

3.1.3 Forming Candidate Phrases

Candidate words that have contiguous positions in a document are concatenated into phrases. We consider noun phrases that match the regular expression (adjective)*(noun)+, of length up to three (i.e., unigrams, bigrams, and trigrams). Finally, phrases are scored using the sum of the scores of the individual words that comprise the phrase (Wan and Xiao, 2008). The top-scoring phrases are output as predictions (i.e., the predicted keyphrases for the document).

4 Experiments and Results

4.1 Datasets and Evaluation Metrics

In order to evaluate the performance of PositionRank, we carried out experiments on three datasets. The first and second datasets were made available by Gollapalli and Caragea (2014) at http://www.cse.unt.edu/~ccaragea/keyphrases.html. These datasets are compiled from the CiteSeerX digital library (Giles et al., 1998) and consist of research papers from the ACM Conference on Knowledge Discovery and Data Mining (KDD) and the World Wide Web Conference (WWW). The third dataset was made available by Nguyen and Kan (2007) and consists of research papers from various disciplines. In our experiments, we use the title and abstract of each paper to extract keyphrases. The author-input keyphrases are used as the gold standard for evaluation. All three datasets are summarized in Table 1, which shows the number of papers in each dataset, the total number of keyphrases (Kp), the average number of keyphrases per document (AvgKp), and a brief insight into the length and number of available keyphrases.

Evaluation Metrics. We use mean reciprocal rank (MRR) curves to illustrate our experimental findings. MRR gives the averaged rank of the first correct prediction and is defined as:

$MRR = \frac{1}{|D|} \sum_{d \in D} \frac{1}{r_d}$

where D is the collection of documents and $r_d$ is the rank at which the first correct keyphrase of document d was found.
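Before moving to the results, here is a small, self-contained sketch tying together Sections 3.1.2 and 3.1.3: the position-weighted teleport vector, the biased PageRank iteration, and phrase scoring by word-score summation. It is an illustration under our own names and simplifications (positions are counted over whichever token sequence is passed in, the convergence check uses a summed absolute difference, and the (adjective)*(noun)+ phrase matching is not shown since it needs POS tags); it is not the authors' released implementation.

```python
from collections import defaultdict

def position_weights(tokens):
    """Normalized teleport weights: each word's weight is the sum of the
    inverse positions (1-indexed) of its occurrences, then normalized."""
    weights = defaultdict(float)
    for position, word in enumerate(tokens, start=1):
        weights[word] += 1.0 / position
    total = sum(weights.values())
    return {word: w / total for word, w in weights.items()}

def position_rank(graph, teleport, alpha=0.85, tol=1e-3, max_iter=100):
    """Biased PageRank: S(v_i) = (1 - alpha) * p_i + alpha * sum_j (w_ji / O(v_j)) * S(v_j).

    graph: {node: {neighbor: edge_weight}} (undirected);
    teleport: the normalized position weights p_i.
    """
    nodes = list(graph)
    scores = {v: 1.0 / len(nodes) for v in nodes}
    out_weight = {v: sum(graph[v].values()) for v in nodes}  # O(v)
    for _ in range(max_iter):
        new_scores = {}
        for v in nodes:
            incoming = sum(scores[u] * graph[u][v] / out_weight[u]
                           for u in graph[v] if out_weight[u] > 0)
            new_scores[v] = (1 - alpha) * teleport.get(v, 0.0) + alpha * incoming
        delta = sum(abs(new_scores[v] - scores[v]) for v in nodes)
        scores = new_scores
        if delta < tol:
            break
    return scores

def score_phrase(phrase_words, word_scores):
    """A candidate phrase is scored by summing the scores of its words."""
    return sum(word_scores.get(w, 0.0) for w in phrase_words)

# Tiny demo on a three-node graph.
toy_graph = {"collaborative": {"crawling": 2.0, "strategies": 1.0},
             "crawling": {"collaborative": 2.0, "strategies": 1.0},
             "strategies": {"collaborative": 1.0, "crawling": 1.0}}
teleport = position_weights("collaborative crawling collaborative crawling strategies".split())
scores = position_rank(toy_graph, teleport)
print(score_phrase(["collaborative", "crawling"], scores))
```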
We also summarize the results in terms of Precision, Recall, and F1-score in a table to contrast PositionRank with previous models, since these metrics are widely used in previous works (Hulth, 2003; Wan and Xiao, 2008; Mihalcea and Tarau, 2004; Hasan and Ng, 2014). To compute "performance@k" (such as MRR@k), we examine the top-k predictions (with k ranging from 1 to 10). We use average k to refer to the average number of keyphrases for a particular dataset as listed in Table 1. For example, average k = 5 for the WWW dataset. For comparison purposes, we used the Porter Stemmer to reduce both predicted and gold keyphrases to a base form.

Dataset   #Docs   Kp     AvgKp   unigrams   bigrams   trigrams   n-grams (n ≥ 4)
KDD       834     3093   3.70    810        1770      471        42
WWW       1350    6405   4.74    2254       3139      931        81
Nguyen    211     882    4.18    260        457       132        33

Table 1: A summary of our datasets.

4.2 Results and Discussion

Our experiments are organized around several questions, which are discussed below.

How sensitive is PositionRank to its parameters? One parameter of our model that can influence its performance is the window size w, which determines how edges are added between candidate words in the graph. We experimented with values of w ranging from 2 to 10 in steps of 1 and chose several configurations for illustration. Figure 2 shows the MRR curves of PositionRank for different values of w, on all three datasets. As can be seen from the figure, the performance of our model does not change significantly as w changes.

Figure 2: MRR curves for PositionRank that uses different values for the window size.

In addition to the window size, our model has one more parameter, i.e., the damping factor α. In order to understand its influence on the performance of PositionRank, we experimented with several values of α, e.g., 0.75, 0.8, 0.85, 0.9, and did not find significant differences in the performance of PositionRank (results not shown due to highly overlapping curves). Hence, in Equation 2, we set α = 0.85 as in (Haveliwala, 2003).

What is the impact of aggregating information from all positions of a word over using a word's first position only? In this experiment, we analyze the influence that position-weighted frequent words in a document have on the performance of PositionRank. Specifically, we compare the performance of the model that aggregates information from all positions of a word's occurrences, referred to as PositionRank - full model, with that of the model that uses only the first position of a word, referred to as PositionRank - fp. In the example from the previous section, a word occurring in the 2nd, 5th, and 10th positions will have a weight of $\frac{1}{2} + \frac{1}{5} + \frac{1}{10} = \frac{4}{5} = 0.8$ in the full model, and a weight of $\frac{1}{2} = 0.5$ in the first-position (fp) model. Note that the weights of words are normalized before they are used in the biased PageRank.

Figure 3 shows the results of this experiment in terms of MRR for the top k predicted keyphrases, with k from 1 to 10, for all datasets, KDD, WWW, and Nguyen. As we can see from the figure, the performance of PositionRank - full model consistently outperforms its counterpart that uses the first position only, on all datasets. We can conclude from this experiment that aggregating information from all occurrences of a word acts as an important component in PositionRank. Hence, we use PositionRank - full model for further comparisons.

Figure 3: The comparison of PositionRank that aggregates information from all positions of a word's occurrences (full model) with the PositionRank that uses only the first position of a word (fp).

How well does position information aid in unsupervised keyphrase extraction from research papers?
In this experiment, we compare our position-biased PageRank model (PositionRank) with two PageRank-based models, TextRank and SingleRank, that do not make use of the position information. In TextRank, an undirected graph is built for each target paper, so that nodes correspond to words and edges are drawn between two words that occur next to each other in text, i.e., the window size w is 2. SingleRank extends TextRank by adding edges between two words that co-occur in a window of w ≥ 2 contiguous words in text.

Figure 4 shows the MRR curves comparing PositionRank with TextRank and SingleRank. As can be seen from the figure, PositionRank substantially outperforms both TextRank and SingleRank on all three datasets, illustrating that the words' positions contain significant hints that aid the keyphrase extraction task. PositionRank can successfully harness this information in an unsupervised setting to obtain good improvements in the extraction performance. For example, PositionRank that uses information from all positions of a word's occurrences yields improvements in MRR@average k of 17.46% for KDD, 20.18% for WWW, and 17.03% for Nguyen over SingleRank.

Figure 4: MRR curves for PositionRank and two unbiased PageRank-based models that do not consider position information.

How does PositionRank compare with other existing state-of-the-art methods?
4.3 Overall Performance As already mentioned, prior works on keyphrase extraction report results also in terms of precision (P), recall (R), and F1-score (F1) (Hulth, 2003; Hasan and Ng, 2010; Liu et al., 2010; Wan and Xiao, 2008). Consistent with these works, in Table 2, we show the results of the comparison of PositionRank with all baselines, in terms of P, R and F1 for top k = 2, 4, 6, 8 predicted keyphrases, on all three datasets. As can be seen from the ta1111 Figure 5: MRR curves for PositionRank and baselines on the three datasets. Dataset Unsupervised Top2 Top4 Top6 Top8 method P% R% F1% P% R% F1% P% R% F1% P% R% F1% KDD PositionRank 11.1 5.6 7.3 10.8 11.1 10.6 9.8 15.3 11.6 9.2 18.9 12.1 PositionRank-fp 10.3 5.3 6.8 10.2 10.4 10.0 9.1 13.8 10.9 8.6 17.2 11.3 TF-IDF 10.5 5.2 6.8 9.6 9.7 9.4 9.2 13.8 10.7 8.7 17.4 11.3 TextRank 8.1 4.0 5.3 8.3 8.5 8.1 8.1 12.3 9.4 7.6 15.3 9.8 SingleRank 9.1 4.6 6.0 9.3 9.4 9.0 8.7 13.1 10.1 8.1 16.4 10.6 ExpandRank 10.3 5.5 6.9 10.4 10.7 10.1 9.2 14.5 10.9 8.4 17.5 11.0 TPR 9.3 4.8 6.2 9.1 9.3 8.9 8.8 13.4 10.3 8.0 16.2 10.4 WWW PositionRank 11.3 5.3 7.0 11.3 10.5 10.5 10.8 14.9 12.1 9.9 18.1 12.3 PositionRank-fp 9.6 4.5 6.0 10.3 9.6 9.6 10.1 13.8 11.2 9.4 17.2 11.7 TF-IDF 9.5 4.5 5.9 10.0 9.3 9.3 9.6 13.3 10.7 9.1 16.8 11.4 TextRank 7.7 3.7 4.8 8.6 7.9 8.0 8.1 12.3 9.8 8.2 15.2 10.2 SingleRank 9.1 4.2 5.6 9.6 8.9 8.9 9.3 13.0 10.5 8.8 16.3 11.0 ExpandRank 10.4 5.3 6.7 10.4 10.6 10.1 9.5 14.7 11.2 8.6 17.7 11.2 TPR 8.8 4.2 5.5 9.6 8.9 8.9 9.5 13.2 10.7 9.0 16.5 11.2 Nguyen PositionRank 10.5 5.8 7.3 10.6 11.4 10.7 11.0 17.2 13.0 10.2 21.1 13.5 PositionRank-fp 10.0 5.4 6.8 10.4 11.1 10.5 11.2 17.4 13.2 10.1 21.2 13.3 TF-IDF 7.3 4.0 5.0 9.5 10.3 9.6 9.1 14.4 10.9 8.9 18.9 11.8 TextRank 6.3 3.6 4.5 7.4 7.4 7.2 7.8 11.9 9.1 7.2 14.8 9.4 SingleRank 9.0 5.2 6.4 9.5 9.9 9.4 9.2 14.5 11.0 8.9 18.3 11.6 ExpandRank 9.5 5.3 6.6 9.5 10.2 9.5 9.1 14.4 10.8 8.7 18.3 11.4 TPR 8.7 4.9 6.1 9.1 9.5 9.0 8.8 13.8 10.5 8.8 18.0 11.5 Table 2: PositionRank against baselines in terms of Precision, Recall and F1-score. Best results are shown in bold blue. ble, PositionRank outperforms all baselines, on all datasets. For example, on WWW at top 6 predicted keyphrases, PositionRank achieves an F1score of 12.1% as compared to 11.2% achieved by ExpandRank and 10.7% achieved by both TFIDF and TPR. From the table, we can also see that ExpandRank is generally the best performing baseline on all datasets. However, it is interesting to note that, unlike PositionRank that uses information only from the target paper, ExpandRank adds external information from a textually-similar neighborhood of the target paper, and hence, is computationally more expensive. PositionRank-first position only (fp) typically performs worse than PositionRank-full model, but it still outperforms the baseline methods for most top k predicted keyphrases, on all datasets. For example, on Nguyen at top 4, PositionRank-fp achieves an F1-score of 10.5% compared to the best baseline (TF-IDF in this case), which reaches only a score of 9.6%. A striking observation is that PositionRank outperforms TPR on all datasets. Compared with our model, TPR is a very complex model, which uses topic models to learn topics of words and infer the topic proportion of documents. Additionally, TPR has more parameters (e.g., the number of topics) that need to be tuned separately for each dataset. 
PositionRank is much less complex: it does not require an additional dataset (e.g., to train a topic model) and its performance is better than that of TPR.

TF-IDF and ExpandRank are the best performing baselines, on all datasets, KDD, WWW, and Nguyen. For example, on KDD at k = 4, TF-IDF and ExpandRank yield an F1-score of 9.4% and 10.1%, respectively, compared with 8.4%, 9.0% and 8.9% achieved by TextRank, SingleRank and TPR, respectively.

With a paired t-test on our results, we found that the improvements in MRR, precision, recall, and F1-score for PositionRank are statistically significant (p-values < 0.05).

4.4 Anecdotal Evidence

We show anecdotal evidence using a paper by Gao et al. (2006) that is part of the Nguyen dataset. Figure 6 shows the title and abstract of this paper together with the author-input keyphrases. We marked in bold dark red the candidate phrases that are predicted as keyphrases by our proposed model (PositionRank), in black the words that are selected as candidate phrases, and in gray the words that are filtered out based on their part-of-speech tags or the stopwords list being used. We show the probability (or weight) of each candidate word in its upper right corner. These weights are computed based on both the word's position and its frequency in the text. Note that our model uses these weights to bias the PageRank algorithm to prefer specific nodes in the graph. As we can see from the figure, component words of the author's keyphrases such as "collaborative," "crawling," "focused," and "geographically" are assigned the highest scores, while candidates such as "performance," "anchor," or "features" are assigned very low weights, making them less likely to be chosen as keyphrases.

Geographically(0.274) Focused(0.134) Collaborative(0.142) Crawling(0.165)
by Weizheng Gao, Hyun Chul Lee and Yingbo Miao
A collaborative(0.142) crawler(0.165) is a group(0.025) of crawling(0.165) nodes(0.033), in which each crawling(0.165) node(0.033) is responsible(0.012) for a specific(0.010) portion(0.010) of the web(0.015). We study the problem(0.007) of collecting(0.011) geographically(0.274) aware(0.006) pages(0.018) using collaborative(0.142) crawling(0.165) strategies(0.017). We first propose several collaborative(0.142) crawling(0.165) strategies(0.017) for the geographically(0.274) focused(0.134) crawling(0.165), whose goal(0.004) is to collect web(0.015) pages(0.018) about specified(0.010) geographic(0.274) locations(0.003) by considering features(0.005) like URL(0.006) address(0.005) of page(0.018) [...] More precisely, features(0.005) like URL(0.006) address(0.005) of page(0.018) and extended(0.004) anchor(0.004) text(0.004) of link(0.004) are shown to yield the best overall performance(0.003) for the geographically(0.274) focused(0.134) crawling(0.165).
Author-input keyphrases: collaborative crawling, geographically focused crawling, geographic entities

Figure 6: The title and abstract of a WWW paper by Gao et al. (2006) and the author-input keyphrases for the paper. Bold dark red phrases represent predicted keyphrases for the document. The number in parentheses following each candidate word is its position-based weight.

5 Conclusion and Future Work

We proposed a novel unsupervised graph-based algorithm, called PositionRank, which incorporates both the position of words and their frequency in a document into a biased PageRank. To our knowledge, we are the first to integrate the position information in novel ways in unsupervised keyphrase extraction.
Specifically, unlike supervised approaches that use only the first position information, we showed that modeling the entire distribution of positions for a word outperforms models that use only the first position. Our experiments on three datasets of research papers show that our proposed model achieves better results than strong baselines, with relative improvements in performance as high as 29.09%. In the future, it would be interesting to explore the performance of PositionRank on other types of documents, e.g., web pages and emails. Acknowledgments We are grateful to Dr. C. Lee Giles for the CiteSeerX data that we used to create our KDD and WWW datasets as well as to train the topic models. We very much thank our anonymous reviewers for their constructive comments and feedback. This research was supported by the NSF award #1423337 to Cornelia Caragea. Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF. References Amjad Abu-Jbara and Dragomir Radev. 2011. Coherent citation-based summarization of scientific papers. In Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. pages 500–509. 1113 Eytan Adar and Srayan Datta. 2015. Building a scientific concept hierarchy database (schbase). In Proceedings of the Association for Computational Linguistics. pages 606–615. Ken Barker and Nadia Cornacchia. 2000. Using noun phrase heads to extract document keyphrases. In Advances in Artificial Intelligence. pages 40–52. Adrien Bougouin, Florian Boudin, and B´eatrice Daille. 2013. Topicrank: Graph-based topic ranking for keyphrase extraction. In International Joint Conference on Natural Language Processing (IJCNLP). pages 543–551. Cornelia Caragea, Florin Adrian Bulgarov, Andreea Godea, and Sujatha Das Gollapalli. 2014a. Citationenhanced keyphrase extraction from research papers: A supervised approach. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 1435–1446. Cornelia Caragea, Jian Wu, Alina Maria Ciobanu, Kyle Williams, Juan Pablo Fern´andez Ram´ırez, HungHsuan Chen, Zhaohui Wu, and C. Lee Giles. 2014b. Citeseer x : A scholarly big dataset. In Proceedings of the 36th European Conference on Information Retrieval. pages 311–322. Jason Chuang, Christopher D Manning, and Jeffrey Heer. 2012. Without the clutter of unimportant words: Descriptive keyphrases for text visualization. ACM Transactions on Computer-Human Interaction 19(3):19. Soheil Danesh, Tamara Sumner, and James H Martin. 2015. Sgrank: Combining statistical and graphical methods to improve the state of the art in unsupervised keyphrase extraction. Lexical and Computational Semantics page 117. Samhaa R El-Beltagy and Ahmed Rafea. 2010. Kpminer: Participation in semeval-2. In Proceedings of the 5th international workshop on semantic evaluation. Association for Computational Linguistics, pages 190–193. Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning. 1999. Domain-specific keyphrase extraction. In Proceedings of the 16th International Joint Conference on Artificial Intelligence. pages 668–673. Weizheng Gao, Hyun Chul Lee, and Yingbo Miao. 2006. Geographically focused collaborative crawling. In Proceedings of the 15th international conference on World Wide Web. ACM, pages 287–296. C Lee Giles, Kurt D Bollacker, and Steve Lawrence. 1998. Citeseer: An automatic citation indexing system. 
In Proceedings of the third ACM conference on Digital libraries. pages 89–98. Sujatha Das Gollapalli and Cornelia Caragea. 2014. Extracting keyphrases from research papers using citation networks. In Proceedings of the 28th American Association for Artificial Intelligence. pages 1629–1635. Khaled M Hammouda, Diego N Matute, and Mohamed S Kamel. 2005. Corephrase: Keyphrase extraction for document clustering. In Machine Learning and Data Mining in Pattern Recognition, Springer, pages 265–274. Kazi Saidul Hasan and Vincent Ng. 2010. Conundrums in unsupervised keyphrase extraction: making sense of the state-of-the-art. In Proceedings of the 23rd International Conference on Computational Linguistics. pages 365–373. Kazi Saidul Hasan and Vincent Ng. 2014. Automatic keyphrase extraction: A survey of the state of the art. In Proceedings of the 27th International Conference on Computational Linguistics. pages 1262–1273. Taher H Haveliwala. 2003. Topic-sensitive pagerank: A context-sensitive ranking algorithm for web search. IEEE transactions on knowledge and data engineering pages 784–796. Anette Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 216–223. Steve Jones and Mark S. Staveley. 1999. Phrasier: A system for interactive document retrieval using keyphrases. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. pages 160–167. Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles. In Proceedings of the 5th International Workshop on Semantic Evaluation. pages 21–26. Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2013. Automatic keyphrase extraction from scientific articles. Language Resources and Evaluation, Springer 47(3):723–742. Shibamouli Lahiri, Sagnik Ray Choudhury, and Cornelia Caragea. 2014. Keyword and keyphrase extraction using centrality measures on collocation networks. CoRR abs/1401.6571. Tho Thi Ngoc Le, Minh Le Nguyen, and Akira Shimazu. 2016. Unsupervised keyphrase extraction: Introducing new kinds of words to keyphrases. In Australasian Joint Conference on Artificial Intelligence. Springer, pages 665–671. Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summarization. In Proceedings of the workshop on Multi-source Multilingual Information Extraction and Summarization. pages 17–24. 1114 Zhiyuan Liu, Wenyi Huang, Yabin Zheng, and Maosong Sun. 2010. Automatic keyphrase extraction via topic decomposition. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 366–376. Zhiyuan Liu, Peng Li, Yabin Zheng, and Maosong Sun. 2009. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. pages 257–266. Patrice Lopez and Laurent Romary. 2010. Humb: Automatic key term extraction from scientific articles in grobid. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, pages 248–251. Christopher D Manning, Prabhakar Raghavan, Hinrich Sch¨utze, et al. 2008. Introduction to information retrieval, volume 1. Cambridge university press Cambridge. Juan Martinez-Romo, Lourdes Araujo, and Andres Duque Fernandez. 2016. 
Semgraph: Extracting keyphrases following a novel semantic graph-based approach. Journal of the Association for Information Science and Technology 67(1):71–82. Olena Medelyan, Eibe Frank, and Ian H Witten. 2009. Human-competitive tagging using automatic keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. ACL, pages 1318–1327. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. pages 404–411. Thuy Dung Nguyen and Min-Yen Kan. 2007. Keyphrase extraction in scientific publications. In Asian Digital Libraries. Springer, pages 317–326. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1998. The pagerank citation ranking: bringing order to the web. Technical report, Standford Digital Library Technologies Project . Vahed Qazvinian, Dragomir R. Radev, and Arzucan ¨Ozg¨ur. 2010. Citation summarization through keyphrase extraction. In Proceedings of the 23rd International Conference on Computational Linguistics. COLING ’10, pages 895–903. Peter D Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval 2(4):303–336. Xiaojun Wan and Jianguo Xiao. 2008. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of the 2008 American Association for Artificial Intelligence. pages 855– 860. Rui Wang, Wei Liu, and Chris McDonald. 2014. Corpus-independent generic keyphrase extraction using word embedding vectors. In Software Engineering Research Conference. page 39. Hongyuan Zha. 2002. Generic summarization and keyphrase extraction using mutual reinforcement principle and sentence clustering. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval. pages 113–120. Yongzheng Zhang, Evangelos Milios, and Nur ZincirHeywood. 2007. A comparative study on key phrase extraction methods in automatic web site summarization. Journal of Digital Information Management 5(5):323. Yongzheng Zhang, Nur Zincir-Heywood, and Evangelos Milios. 2004. World wide web site summarization. Web Intelligence and Agent Systems 2(1):39– 53. 1115
2017
102
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1116–1126 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1103 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1116–1126 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1103 Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses Ryan Lowe♥∗ Michael Noseworthy♥∗ Iulian V. Serban♦ Nicolas A.-Gontier♥ Yoshua Bengio♦‡ Joelle Pineau♥‡ ♥Reasoning and Learning Lab, School of Computer Science, McGill University ♦Montreal Institute for Learning Algorithms, Universit´e de Montr´eal ‡ CIFAR Senior Fellow Abstract Automatically evaluating the quality of dialogue responses for unstructured domains is a challenging problem. Unfortunately, existing automatic evaluation metrics are biased and correlate very poorly with human judgements of response quality. Yet having an accurate automatic evaluation procedure is crucial for dialogue research, as it allows rapid prototyping and testing of new models with fewer expensive human evaluations. In response to this challenge, we formulate automatic dialogue evaluation as a learning problem. We present an evaluation model (ADEM) that learns to predict human-like scores to input responses, using a new dataset of human response scores. We show that the ADEM model’s predictions correlate significantly, and at a level much higher than word-overlap metrics such as BLEU, with human judgements at both the utterance and systemlevel. We also show that ADEM can generalize to evaluating dialogue models unseen during training, an important step for automatic dialogue evaluation. 1 Introduction Building systems that can naturally and meaningfully converse with humans has been a central goal of artificial intelligence since the formulation of the Turing test (Turing, 1950). Research on one type of such systems, sometimes referred to as non-task-oriented dialogue systems, goes back to the mid-60s with Weizenbaum’s famous program ELIZA: a rule-based system mimicking a Rogerian psychotherapist by persistently either rephrasing statements or asking questions (Weizenbaum, ∗Indicates equal contribution. Context of Conversation Speaker A: Hey, what do you want to do tonight? Speaker B: Why don’t we go see a movie? Model Response Nah, let’s do something active. Reference Response Yeah, the film about Turing looks great! Figure 1: Example where word-overlap scores fail for dialogue evaluation; although the model response is reasonable, it has no words in common with the reference response, and thus would be given low scores by metrics such as BLEU. 1966). Recently, there has been a surge of interest towards building large-scale non-task-oriented dialogue systems using neural networks (Sordoni et al., 2015b; Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016a; Li et al., 2015). These models are trained in an end-to-end manner to optimize a single objective, usually the likelihood of generating the responses from a fixed corpus. Such models have already had a substantial impact in industry, including Google’s Smart Reply system (Kannan et al., 2016), and Microsoft’s Xiaoice chatbot (Markoff and Mozur, 2015), which has over 20 million users. 
One of the challenges when developing such systems is to have a good way of measuring progress, in this case the performance of the chatbot. The Turing test provides one solution to the evaluation of dialogue systems, but there are limitations with its original formulation. The test requires live human interactions, which is expensive and difficult to scale up. Furthermore, the test requires carefully designing the instructions to the human interlocutors, in order to balance their behaviour and expectations so that different systems may be ranked accurately by performance. Although unavoidable, these instructions introduce bias into the evaluation measure. The more common approach of having 1116 humans evaluate the quality of dialogue system responses, rather than distinguish them from human responses, induces similar drawbacks in terms of time, expense, and lack of scalability. In the case of chatbots designed for specific conversation domains, it may also be difficult to find sufficient human evaluators with appropriate background in the topic (Lowe et al., 2015). Despite advances in neural network-based models, evaluating the quality of dialogue responses automatically remains a challenging and understudied problem in the non-task-oriented setting. The most widely used metric for evaluating such dialogue systems is BLEU (Papineni et al., 2002), a metric measuring word overlaps originally developed for machine translation. However, it has been shown that BLEU and other word-overlap metrics are biased and correlate poorly with human judgements of response quality (Liu et al., 2016). There are many obvious cases where these metrics fail, as they are often incapable of considering the semantic similarity between responses (see Figure 1). Despite this, many researchers still use BLEU to evaluate their dialogue models (Ritter et al., 2011; Sordoni et al., 2015b; Li et al., 2015; Galley et al., 2015; Li et al., 2016a), as there are few alternatives available that correlate with human judgements. While human evaluation should always be used to evaluate dialogue models, it is often too expensive and time-consuming to do this for every model specification (for example, for every combination of model hyperparameters). Therefore, having an accurate model that can evaluate dialogue response quality automatically — what could be considered an automatic Turing test — is critical in the quest for building human-like dialogue agents. To make progress towards this goal, we make the simplifying assumption that a ‘good’ chatbot is one whose responses are scored highly on appropriateness by human evaluators. We believe this is sufficient for making progress as current dialogue systems often generate inappropriate responses. We also find empirically that asking evaluators for other metrics results in either low inter-annotator agreement, or the scores are highly correlated with appropriateness (see supp. material). Thus, we collect a dataset of appropriateness scores to various dialogue responses, and we use this dataset to train an automatic dialogue evaluation model (ADEM). The model is trained in a semi-supervised manner using a hierarchical recur# Examples 4104 # Contexts 1026 # Training examples 2,872 # Validation examples 616 # Test examples 616 κ score (inter-annotator 0.63 correlation) Table 1: Statistics of the dialogue response evaluation dataset. Each example is in the form (context, model response, reference response, human score). rent neural network (RNN) to predict human scores. 
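The record layout described in Table 1's caption can be made concrete with a short, hypothetical Python sketch; the field and function names are ours and not part of the released dataset:

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class EvalExample:
    # One annotated example: (context, model response, reference response, human score)
    context: str             # dialogue context (one or more turns)
    model_response: str      # candidate from TF-IDF, DE, HRED or a human
    reference_response: str  # held-out reference response
    human_score: int         # appropriateness rating in {1, ..., 5}

def group_by_context(examples):
    # Group examples by context, e.g. to build splits with no context overlap.
    groups = defaultdict(list)
    for ex in examples:
        groups[ex.context].append(ex)
    return groups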
We show that ADEM scores correlate significantly with human judgement at both the utterance-level and system-level. We also show that ADEM can often generalize to evaluating new models, whose responses were unseen during training, making ADEM a strong first step towards effective automatic dialogue response evaluation.1 2 Data Collection To train a model to predict human scores to dialogue responses, we first collect a dataset of human judgements (scores) of Twitter responses using the crowdsourcing platform Amazon Mechanical Turk (AMT).2 The aim is to have accurate human scores for a variety of conversational responses — conditioned on dialogue contexts — which span the full range of response qualities. For example, the responses should include both relevant and irrelevant responses, both coherent and non-coherent responses and so on. To achieve this variety, we use candidate responses from several different models. Following (Liu et al., 2016), we use the following 4 sources of candidate responses: (1) a response selected by a TF-IDF retrieval-based model, (2) a response selected by the Dual Encoder (DE) (Lowe et al., 2015), (3) a response generated using the hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2016a), and (4) human-generated responses. It should be noted that the humangenerated candidate responses are not the reference responses from a fixed corpus, but novel human responses that are different from the reference. In addition to increasing response variety, this is necessary because we want our evaluation model to learn to compare the reference responses to the candidate responses. We provide the details of our 1Code and trained model parameters are available online: https://github.com/mike-n-7/ADEM. 2All data collection was conducted in accordance with the policies of the host institutions’ ethics board. 1117 AMT experiments in the supplemental material, including additional experiments suggesting that several other metrics are currently unlikely to be useful for building evaluation models. Note that, in order to maximize the number of responses obtained with a fixed budget, we only obtain one evaluation score per dialogue response in the dataset. To train evaluation models on human judgements, it is crucial that we obtain scores of responses that lie near the distribution produced by advanced models. This is why we use the Twitter Corpus (Ritter et al., 2011), as such models are pre-trained and readily available. Further, the set of topics discussed is quite broad — as opposed to the very specific Ubuntu Dialogue Corpus (Lowe et al., 2015) — and therefore the model may also be suited to other chit-chat domains. Finally, since it does not require domain specific knowledge (e.g. technical knowledge), it should be easy for AMT workers to annotate. 3 Technical Background 3.1 Recurrent Neural Networks Recurrent neural networks (RNNs) are a type of neural network with time-delayed connections between the internal units. This leads to the formation of a hidden state ht, which is updated for every input: ht = f(Whhht−1 + Wihxt), where Whh and Wih are parameter matrices, f is a non-linear activation function such as tanh, and xt is the input at time t. The hidden state allows for RNNs to better model sequential data, such as language. In this paper, we consider RNNs augmented with long-short term memory (LSTM) units (Hochreiter and Schmidhuber, 1997). LSTMs add a set of gates to the RNN that allow it to learn how much to update the hidden state. 
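To make the recurrence concrete, the following is a minimal NumPy sketch of a single vanilla RNN step as defined above; it is an illustration rather than the authors' implementation, and the dimensions are arbitrary:

import numpy as np

def rnn_step(h_prev, x_t, W_hh, W_ih, f=np.tanh):
    # One update of the hidden state: h_t = f(W_hh h_{t-1} + W_ih x_t)
    return f(W_hh @ h_prev + W_ih @ x_t)

rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3                 # illustrative sizes only
W_hh = rng.normal(size=(hidden_size, hidden_size))
W_ih = rng.normal(size=(hidden_size, input_size))

h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):   # a toy sequence of 5 inputs
    h = rnn_step(h, x_t, W_hh, W_ih)           # h summarizes the sequence so far

An LSTM cell performs the same kind of update, but routes it through the gates mentioned above, which control how much of the previous hidden state is kept at each step.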
LSTMs are one of the most well-established methods for dealing with the vanishing gradient problem in recurrent networks (Hochreiter, 1991; Bengio et al., 1994). 3.2 Word-Overlap Metrics One of the most popular approaches for automatically evaluating the quality of dialogue responses is by computing their word overlap with the reference response. In particular, the most popular metrics are the BLEU and METEOR scores used for machine translation, and the ROUGE score used for automatic summarization. While these metrics tend to correlate with human judgements in their target domains, they have recently been shown to highly biased and correlate very poorly with human judgements for dialogue response evaluation (Liu et al., 2016). We briefly describe BLEU here, and provide a more detailed summary of word-overlap metrics in the supplemental material. BLEU BLEU (Papineni et al., 2002) analyzes the co-occurrences of n-grams in the reference and the proposed responses. It computes the n-gram precision for the whole dataset, which is then multiplied by a brevity penalty to penalize short translations. For BLEU-N, N denotes the largest value of ngrams considered (usually N = 4). Drawbacks One of the major drawbacks of word-overlap metrics is their failure in capturing the semantic similarity (and other structure) between the model and reference responses when there are few or no common words. This problem is less critical for machine translation; since the set of reasonable translations of a given sentence or document is rather small, one can reasonably infer the quality of a translated sentence by only measuring the word-overlap between it and one (or a few) reference translations. However, in dialogue, the set of appropriate responses given a context is much larger (Artstein et al., 2009); in other words, there is a very high response diversity that is unlikely to be captured by word-overlap comparison to a single response. Further, word-overlap scores are computed directly between the model and reference responses. As such, they do not consider the context of the conversation. While this may be a reasonable assumption in machine translation, it is not the case for dialogue; whether a model response is an adequate substitute for the reference response is clearly context-dependent. For example, the two responses in Figure 1 are equally appropriate given the context. However, if we simply change the context to: “Have you heard of any good movies recently?”, the model response is no longer relevant while the reference response remains valid. 4 An Automatic Dialogue Evaluation Model (ADEM) To overcome the problems of evaluation with wordoverlap metrics, we aim to construct a dialogue evaluation model that: (1) captures semantic similarity beyond word overlap statistics, and (2) exploits both the context and the reference response to calculate its score for the model response. We 1118 Figure 2: The ADEM model, which uses a hierarchical encoder to produce the context embedding c. call this evaluation model ADEM. ADEM learns distributed representations of the context, model response, and reference response using a hierarchical RNN encoder. Given the dialogue context c, reference response r, and model response ˆr, ADEM first encodes each of them into vectors (c, ˆr, and r, respectively) using the RNN encoder. 
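The bilinear scoring step that Equation (1) below defines over these vectors can be sketched as follows; this is a minimal illustration with placeholder constants, not the released ADEM code:

import numpy as np

def adem_score(c, r, r_hat, M, N, alpha, beta):
    # Equation (1): score = (c^T M r_hat + r^T N r_hat - alpha) / beta
    return (c @ M @ r_hat + r @ N @ r_hat - alpha) / beta

n = 50                       # reduced embedding size used in the paper
M, N = np.eye(n), np.eye(n)  # learned matrices, initialized to the identity
alpha, beta = 4.0, 2.0       # placeholder stand-ins for the paper's alpha, beta

rng = np.random.default_rng(1)
c, r, r_hat = rng.normal(size=(3, n))   # stand-ins for the encoder outputs
print(adem_score(c, r, r_hat, M, N, alpha, beta))

Training then fits M and N by minimizing the squared error between these scores and the human scores, with L2 regularization, as given in Equation (2) below.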
Then, ADEM computes the score using a dot-product between the vector representations of c, r, and ˆr in a linearly transformed space: : score(c, r, ˆr) = (cT Mˆr + rT Nˆr −α)/β (1) where M, N ∈Rn are learned matrices initialized to the identity, and α, β are scalar constants used to initialize the model’s predictions in the range [1, 5]. The model is shown in Figure 2. The matrices M and N can be interpreted as linear projections that map the model response ˆr into the space of contexts and reference responses, respectively. The model gives high scores to responses that have similar vector representations to the context and reference response after this projection. The model is end-to-end differentiable; all the parameters can be learned by backpropagation. In our implementation, the parameters θ = {M, N} of the model are trained to minimize the squared error between the model predictions and the human score, with L2-regularization: L = X i=1:K [score(ci, ri, ˆri) −humani]2 + γ||θ||2 (2) where γ is a scalar constant. The simplicity of our model leads to both accurate predictions and fast evaluation (see supp. material), which is important to allow rapid prototyping of dialogue systems. The hierarchical RNN encoder in our model consists of two layers of RNNs (El Hihi and Bengio, 1995; Sordoni et al., 2015a). The lower-level RNN, the utterance-level encoder, takes as input words from the dialogue, and produces a vector output at the end of each utterance. The context-level encoder takes the representation of each utterance as input and outputs a vector representation of the context. This hierarchical structure is useful for incorporating information from early utterances in the context (Serban et al., 2016a). Following previous work, we take the last hidden state of the context-level encoder as the vector representation of the input utterance or context. The parameters of the RNN encoder are pretrained and are not learned from the human scores. An important point is that the ADEM procedure above is not a dialogue retrieval model: the fundamental difference is that ADEM has access to the reference response. Thus, ADEM can compare a model’s response to a known good response, which is significantly easier than inferring response quality from solely the context. Pre-training with VHRED We would like an evaluation model that can make accurate predictions from few labeled examples, since these examples are expensive to obtain. We therefore employ semi-supervised learning, and use a pre-training procedure to learn the parameters of the encoder. In particular, we train the encoder as part of a neural dialogue model; we attach a third decoder RNN that takes the output of the encoder as input, and train it to predict the next utterance of a dialogue conditioned on the context. The dialogue model we employ for pre-training is the latent variable hierarchical recurrent encoderdecoder (VHRED) model (Serban et al., 2016b), shown in Figure 3. The VHRED model is an extension of the original hierarchical recurrent encoderdecoder (HRED) model (Serban et al., 2016a) with a turn-level stochastic latent variable. The dialogue context is encoded into a vector using our hierarchical encoder, and the VHRED then samples a Gaus1119 Figure 3: The VHRED model used for pre-training. The hierarchical structure of the RNN encoder is shown in the red box around the bottom half of the figure. 
After training using the VHRED procedure, the last hidden state of the context-level encoder is used as a vector representation of the input text. sian variable that is used to condition the decoder (see supplemental material for further details). After training VHRED, we use the last hidden state of the context-level encoder, when c, r, and ˆr are fed as input, as the vector representations for c, r, and ˆr, respectively. We use representations from the VHRED model as it produces more diverse and coherent responses compared to HRED. 5 Experiments 5.1 Experimental Procedure In order to reduce the effective vocabulary size, we use byte pair encoding (BPE) (Gage, 1994; Sennrich et al., 2015), which splits each word into sub-words or characters. We also use layer normalization (Ba et al., 2016) for the hierarchical encoder, which we found worked better at the task of dialogue generation than the related recurrent batch normalization (Ioffe and Szegedy, 2015; Cooijmans et al., 2016). To train the VHRED model, we employed several of the same techniques found in (Serban et al., 2016b) and (Bowman et al., 2016): we drop words in the decoder with a fixed rate of 25%, and we anneal the KL-divergence term linearly from 0 to 1 over the first 60,000 batches. We use Adam as our optimizer (Kingma and Ba, 2014). When training ADEM, we also employ a subsampling procedure based on the model response length. In particular, we divide the training examples into bins based on the number of words in a response and the score of that response. We then over-sample from bins across the same score to ensure that ADEM does not use response length to predict the score. This is because humans have a tendency to give a higher rating to shorter responses than to longer responses (Serban et al., 2016b), as shorter responses are often more generic and thus are more likely to be suitable to the context. Indeed, the test set Pearson correlation between response length and human score is 0.27. For training VHRED, we use a context embedding size of 2000. However, we found the ADEM model learned more effectively when this embedding size was reduced. Thus, after training VHRED, we use principal component analysis (PCA) (Pearson, 1901) to reduce the dimensionality of the context, model response, and reference response embeddings to n. We found experimentally that n = 50 provided the best performance. When training our models, we conduct early stopping on a separate validation set. For the evaluation dataset, we split the train/ validation/ test sets such that there is no context overlap (i.e. the contexts in the test set are unseen during training). 5.2 Results Utterance-level correlations We first present new utterance-level correlation results3 for existing 3We present both the Spearman correlation (computed on ranks, depicts monotonic relationships) and Pearson correlation (computed on true values, depicts linear relationships) 1120 (a) BLEU-2 (b) ROUGE (c) ADEM Figure 4: Scatter plot showing model against human scores, for BLEU-2 and ROUGE on the full dataset, and ADEM on the test set. We add Gaussian noise drawn from N(0, 0.3) to the integer human scores to better visualize the density of points, at the expense of appearing less correlated. 
Full dataset Test set Metric Spearman Pearson Spearman Pearson BLEU-2 0.039 (0.013) 0.081 (<0.001) 0.051 (0.254) 0.120 (<0.001) BLEU-4 0.051 (0.001) 0.025 (0.113) 0.063 (0.156) 0.073 (0.103) ROUGE 0.062 (<0.001) 0.114 (<0.001) 0.096 (0.031) 0.147 (<0.001) METEOR 0.021 (0.189) 0.022 (0.165) 0.013 (0.745) 0.021 (0.601) T2V 0.140 (<0.001) 0.141 (<0.001) 0.140 (<0.001) 0.141 (<0.001) VHRED -0.035 (0.062) -0.030 (0.106) -0.091 (0.023) -0.010 (0.805) Validation set Test set C-ADEM 0.338 (<0.001) 0.355 (<0.001) 0.366 (<0.001) 0.363 (<0.001) R-ADEM 0.404 (<0.001) 0.404 (<0.001) 0.352 (<0.001) 0.360 (<0.001) ADEM (T2V) 0.252 (<0.001) 0.265 (<0.001) 0.280 (<0.001) 0.287 (<0.001) ADEM 0.410 (<0.001) 0.418 (<0.001) 0.428 (<0.001) 0.436 (<0.001) Table 2: Correlation between metrics and human judgements, with p-values shown in brackets. ‘ADEM (T2V)’ indicates ADEM with tweet2vec embeddings (Dhingra et al., 2016), and ‘VHRED’ indicates the dot product of VHRED embeddings (i.e. ADEM at initialization). C- and R-ADEM represent the ADEM model trained to only compare the model response to the context or reference response, respectively. We compute the baseline metric scores (top) on the full dataset to provide a more accurate estimate of their scores (as they are not trained on a training set). word-overlap metrics, in addition to results with embedding baselines and ADEM, in Table 2. The baseline metrics are evaluated on the entire dataset of 4,104 responses to provide the most accurate estimate of the score. 4 We measure the correlation for ADEM on the validation and test sets, which constitute 616 responses each. We also conduct an analysis of the response data from (Liu et al., 2016), where the pre-processing is standardized by removing ‘<first speaker>’ tokens at the beginning of each utterance. The results are detailed in the supplemental material. We can observe from both this data, and the new data in Table 2, that the correlations for the word-overlap metrics are even lower than estimated in previous scores. 4Note that our word-overlap correlation results in Table 2 are also lower than those presented in (Galley et al., 2015). This is because Galley et al. measure corpus-level correlation, i.e. correlation averaged across different subsets (of size 100) of the data, and pre-filter for high-quality reference responses. studies (Liu et al., 2016; Galley et al., 2015). In particular, this is the case for BLEU-4, which has frequently been used for dialogue response evaluation (Ritter et al., 2011; Sordoni et al., 2015b; Li et al., 2015; Galley et al., 2015; Li et al., 2016a). We can see from Table 2 that ADEM correlates far better with human judgement than the wordoverlap baselines. This is further illustrated by the scatterplots in Figure 4. We also compare with ADEM using tweet2vec embeddings (Dhingra et al., 2016). In this case, instead of using the VHRED pre-training method presented in Section 4, we use off-the-shelf embeddings for c, r, and ˆr, and finetune M and N on our dataset. These tweet2vec embeddings are computed at the character-level with a bidirectional GRU on a Twitter dataset for hashtag prediction (Dhingra et al., 2016). We find that they obtain reasonable but inferior performance compared to using VHRED embeddings. 1121 Figure 5: Scatterplots depicting the system-level correlation results for ADEM, BLEU-2, BLEU-4,and ROUGE on the test set. Each point represents the average scores for the responses from a dialogue model (TFIDF, DE, HRED, human). 
Human scores are shown on the horizontal axis, with normalized metric scores on the vertical axis. The ideal metric has a perfectly linear relationship. System-level correlations We show the systemlevel correlations for various metrics in Table 3, and present it visually in Figure 5. Each point in the scatterplots represents a dialogue model; humans give low scores to TFIDF and DE responses, higher scores to HRED and the highest scores to other human responses. It is clear that existing word-overlap metrics are incapable of capturing this relationship for even 4 models. This renders them completely deficient for dialogue evaluation. However, ADEM produces almost the same model ranking as humans, achieving a significant Pearson correlation of 0.954.5 Thus, ADEM correlates well with humans both at the response and system level. Generalization to previously unseen models When ADEM is used in practice, it will take as input responses from a new model that it has not seen during training. Thus, it is crucial that ADEM correlates with human judgements for new models. We test ADEM’s generalization ability by performing a leave-one-out evaluation. For each dialogue model that was the source of response data for training ADEM (TF-IDF, Dual Encoder, HRED, humans), we conduct an experiment where we train on all model responses except those from the chosen model, and test only on the model that was unseen during training. The results are given in Table 4. We observe that the ADEM model is able to generalize for all models except the Dual Encoder. This is particularly surprising for the HRED model; in this case, ADEM was trained only on responses that were written by humans (from retrieval models or human-generated), but is able to generalize to responses produced by a generative neural network model. When testing on the entire test set, 5For comparison, BLEU achieves a system-level correlation of 0.99 on 5 models in the translation domain (Papineni et al., 2002). Metric Pearson BLEU-1 -0.079 (0.921) BLEU-2 0.308 (0.692) BLEU-3 -0.537 (0.463) BLEU-4 -0.536 (0.464) ROUGE 0.268 (0.732) ADEM 0.954 (0.046) Table 3: System-level correlation, with the p-value in brackets. the model achieves comparable correlations to the ADEM model that was trained on 25% less data selected at random. Qualitative Analysis To illustrate some strengths and weaknesses of ADEM, we show human and ADEM scores for each of the responses to various contexts in Table 5. There are several instances where ADEM predicts accurately: in particular, ADEM is often very good at assigning low scores to poor responses. This seen in the first two contexts, where most of the responses given a score of 1 from humans are given scores less than 2 by ADEM. The single exception in response (4) for the second context seems somewhat appropriate and should perhaps have been scored higher by the human evaluator. There are also several instances where the model assigns high scores to suitable responses, as in the first two contexts. One drawback we observed is that ADEM tends to be too conservative when predicting response scores. This is the case in the third context, where the model assigns low scores to most of the responses that a human rated highly. This behaviour is likely due to the squared error loss used to train ADEM; since the model receives a large penalty for incorrectly predicting an extreme value, it learns to predict scores closer to the average human score. 
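The two levels of correlation reported above (per response in Table 2, per system in Table 3) can be computed with standard tools; the following is a minimal sketch using SciPy, with function names that are ours rather than the authors':

import numpy as np
from scipy.stats import pearsonr, spearmanr

def utterance_level(metric_scores, human_scores):
    # Per-response correlation with human scores, as reported in Table 2.
    return spearmanr(metric_scores, human_scores)[0], pearsonr(metric_scores, human_scores)[0]

def system_level(metric_scores, human_scores, system_ids):
    # Correlate per-system mean scores (TF-IDF, DE, HRED, human), as in Table 3.
    def mean_per_system(scores, system):
        return np.mean([s for s, sid in zip(scores, system_ids) if sid == system])
    systems = sorted(set(system_ids))
    metric_means = [mean_per_system(metric_scores, s) for s in systems]
    human_means = [mean_per_system(human_scores, s) for s in systems]
    return pearsonr(metric_means, human_means)[0]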
We provide many more experiments, including investigation of evaluation speed, learning curves, data efficiency, a failure analysis, and the primary source of improvement over word-overlap metrics 1122 Test on full dataset Test on removed model responses Data Removed Spearman Pearson Spearman Pearson TF-IDF 0.406 (<0.001) 0.409 (<0.001) 0.186 (0.021) 0.196 (0.015) Dual Encoder 0.364 (<0.001) 0.373 (<0.001) 0.026 (0.749) 0.027 (0.736) HRED 0.393 (<0.001) 0.396 (<0.001) 0.151 (0.060) 0.148 (<0.070) Human 0.292 (<0.001) 0.298 (<0.001) 0.216 (<0.010) 0.148 (<0.070) Average 0.364 0.369 0.145 0.130 25% at random 0.378 (<0.001) 0.384 (<0.001) — — Table 4: Correlation for ADEM when various model responses are removed from the training set. The left two columns show performance on the entire test set, and the right two columns show performance on responses only from the dialogue model not seen during training. The last row (25% at random) corresponds to the ADEM model trained on all model responses, but with the same amount of training data as the model above (i.e. 25% less data than the full training set). Context Reference response Model responses Human score ADEM score photo to see my television debut go to some. some on <url> - hehe <url> → it really was you? i thought ppl were recognizing someone who looked like you! were the oysters worth the wait? yeah it was me . haha i ’d kinda forgotten about it it was filmed a while ago 1) i’m not sure. i just don’t know what to do with it. 2) you heard the horsepower productions remix of lee scratch perry’s ‘exercising’ off his ‘mighty upsetter’ album? 3) you wont chug a fuzzy peach navel 4) they were! 3 1 1 5 1.602 1.513 1.744 3.274 just beat call of duty!! →want a cookie? →yes!! →come get it im in kenmore at the moment 1) i’m gonna get a new phone some moro 2) no way man. 3) wow i just got a free pizza coupon! get yours before theres no more! <url> 4) i’m going to go to the mall. 1 5 1 1 1.848 4.265 0.921 2.634 am i out of twitter jail yet? testing → yeah. i posted bail →thanks. i am a right chatter tweetbox on sundays. same happened last sunday lol any news on meeting our user ? i go to the us on friday and i don ’t want to miss anything arranged 1) i’m not sure if i’m going to be able to get it. 2) good to see another mac user in the leadership ranks 3) awww poor baby hope u get to feeling better soon. maybe some many work days at piedmont 4) did you tweet too much? 3 4 2 5 1.912 1.417 1.123 2.539 Table 5: Examples of scores given by the ADEM model. in the supplemental material. 6 Related Work Related to our approach is the literature on novel methods for the evaluation of machine translation systems, especially through the WMT evaluation task (Callison-Burch et al., 2011; Mach´acek and Bojar, 2014; Stanojevic et al., 2015). In particular, (Albrecht and Hwa, 2007; Gupta et al., 2015) have proposed to evaluate machine translation systems using Regression and Tree-LSTMs respectively. Their approach differs from ours as, in the dialogue domain, we must additionally condition our score on the context of the conversation, which is not necessary in translation. There has also been related work on estimating the quality of responses in chat-oriented dialogue systems. (DeVault et al., 2011) train an automatic dialogue policy evaluation metric from 19 structured role-playing sessions, enriched with paraphrases and external referee annotations. 
(Gandhe and Traum, 2016) propose a semi-automatic evaluation metric for dialogue coherence, similar to BLEU and ROUGE, based on ‘wizard of Oz’ type data.6 (Xiang et al., 2014) propose a framework to predict utterance-level problematic situations in a dataset of Chinese dialogues using intent and sentiment factors. Finally, (Higashinaka et al., 2014) train a classifier to distinguish user utterances from system-generated utterances using various dialogue features, such as dialogue acts, question types, and predicate-argument structures. Several recent approaches use hand-crafted reward features to train dialogue models using reinforcement learning (RL). For example, (Li et al., 2016b) use features related to ease of answering and information flow, and (Yu et al., 2016) use metrics related to turn-level appropriateness and conversational depth. These metrics are based on hand-crafted features, which only capture a small set of relevant aspects; this inevitably leads to suboptimal performance, and it is unclear whether such objectives are preferable over retrieval-based crossentropy or word-level maximum log-likelihood objectives. Furthermore, many of these metrics are computed at the conversation-level, and are not available for evaluating single dialogue responses. 6In ‘wizard of Oz’ scenarios, humans play the role of the dialogue system, usually unbeknown to the interlocutors. 1123 The metrics that can be computed at the responselevel could be incorporated into our framework, for example by adding a term to equation 1 consisting of a dot product between these features and a vector of learned parameters. There has been significant work on evaluation methods for task-oriented dialogue systems, which attempt to solve a user’s task such as finding a restaurant. These methods include the PARADISE framework (Walker et al., 1997) and MeMo (M¨oller et al., 2006), which consider a task completion signal. PARADISE in particular is perhaps the first work on learning an automatic evaluation function for dialogue, accomplished through linear regression. However, PARADISE requires that one can measure task completion and task complexity, which are not available in our setting. 7 Discussion We use the Twitter Corpus to train our models as it contains a broad range of non-task-oriented conversations and it has been used to train many state-ofthe-art models. However, our model could easily be extended to other general-purpose datasets, such as Reddit, once similar pre-trained models become publicly available. Such models are necessary even for creating a test set in a new domain, which will help us determine if ADEM generalizes to related dialogue domains. We leave investigating the domain transfer ability of ADEM for future work. The evaluation model proposed in this paper favours dialogue models that generate responses that are rated as highly appropriate by humans. It is likely that this property does not fully capture the desired end-goal of chatbot systems. For example, one issue with building models to approximate human judgements of response quality is the problem of generic responses. Since humans often provide high scores to generic responses due to their appropriateness for many given contexts (Shang et al., 2016), a model trained to predict these scores will exhibit the same behaviour. An important direction for future work is modifying ADEM such that it is not subject to this bias. 
This could be done, for example, by censoring ADEM’s representations (Edwards and Storkey, 2016) such that they do not contain any information about length. Alternatively, one can combine this with an adversarial evaluation model (Kannan and Vinyals, 2017; Li et al., 2017) that assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. In this case, a model that generates generic responses will easily be distinguishable and obtain a low score. An important direction of future research is building models that can evaluate the capability of a dialogue system to have an engaging and meaningful interaction with a human. Compared to evaluating a single response, this evaluation is arguably closer to the end-goal of chatbots. However, such an evaluation is extremely challenging to do in a completely automatic way. We view the evaluation procedure presented in this paper as an important step towards this goal; current dialogue systems are incapable of generating responses that are rated as highly appropriate by humans, and we believe our evaluation model will be useful for measuring and facilitating progress in this direction. References Joshua Albrecht and Rebecca Hwa. 2007. Regression for sentence-level mt evaluation with pseudo references. In ACL. Ron Artstein, Sudeep Gandhe, Jillian Gerten, Anton Leuski, and David Traum. 2009. Semi-formal evaluation of conversational characters. In Languages: From Formal to Natural, Springer, pages 22–35. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450 . Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks 5(2):157–166. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. COLING . Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar F Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation. Association for Computational Linguistics, pages 22–64. Tim Cooijmans, Nicolas Ballas, C´esar Laurent, and Aaron Courville. 2016. Recurrent batch normalization. arXiv preprint arXiv:1603.09025 . David DeVault, Anton Leuski, and Kenji Sagae. 2011. Toward learning and evaluation of dialogue policies with text examples. In Proceedings of the SIGDIAL 2011 Conference. Association for Computational Linguistics, pages 39–48. 1124 Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W Cohen. 2016. Tweet2vec: Character-based distributed representations for social media. arXiv preprint arXiv:1605.03481 . Harrison Edwards and Amos Storkey. 2016. Censoring representations with an adversary. ICLR . Salah El Hihi and Yoshua Bengio. 1995. Hierarchical recurrent neural networks for long-term dependencies. In NIPS. Citeseer, volume 400, page 409. Philip Gage. 1994. A new algorithm for data compression. The C Users Journal 12(2):23–38. Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltableu: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863 . Sudeep Gandhe and David Traum. 2016. A semiautomated evaluation metric for dialogue model coherence. 
In Situated Dialog in Speech-Based Human-Computer Interaction, Springer, pages 217– 225. Rohit Gupta, Constantin Orasan, and Josef van Genabith. 2015. Reval: A simple and effective machine translation evaluation metric based on recurrent neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Ryuichiro Higashinaka, Toyomi Meguro, Kenji Imamura, Hiroaki Sugiyama, Toshiro Makino, and Yoshihiro Matsuo. 2014. Evaluating coherence in open domain conversational systems. In INTERSPEECH. pages 130–134. Sepp Hochreiter. 1991. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Universit¨at M¨unchen page 91. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735– 1780. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 . Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, L´aszl´o Luk´acs, Marina Ganea, Peter Young, et al. 2016. Smart reply: Automated response suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD). volume 36, pages 495– 503. Anjuli Kannan and Oriol Vinyals. 2017. Adversarial evaluation of dialogue models. arXiv preprint arXiv:1701.08198 . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055 . Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155 . Jiwei Li, Will Monroe, and Dan Jurafsky. 2017. Learning to decode for future success. arXiv preprint arXiv:1701.06549 . Jiwei Li, Will Monroe, Alan Ritter, and Dan Jurafsky. 2016b. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541 . Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023 . Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909 . Matouˇs Mach´acek and Ondrej Bojar. 2014. Results of the wmt14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation. Citeseer, pages 293–301. J. Markoff and P. Mozur. 2015. For sympathetic ear, more chinese turn to smartphone program. NY Times . Sebastian M¨oller, Roman Englert, Klaus-Peter Engelbrecht, Verena Vanessa Hafner, Anthony Jameson, Antti Oulasvirta, Alexander Raake, and Norbert Reithinger. 2006. Memo: towards automatic usability evaluation of spoken dialogue services by user error simulations. In INTERSPEECH. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318. Karl Pearson. 1901. Principal components analysis. 
The London, Edinburgh and Dublin Philosophical Magazine and Journal 6(2):566. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, pages 583–593. 1125 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909 . Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016a. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI. pages 3776–3784. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016b. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069 . Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364 . Lifeng Shang, Tetsuya Sakai, Zhengdong Lu, Hang Li, Ryuichiro Higashinaka, and Yusuke Miyao. 2016. Overview of the ntcir-12 short text conversation task. Proceedings of NTCIR-12 pages 473–484. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015a. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, pages 553–562. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015b. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714 . Miloˇs Stanojevic, Amir Kamran, Philipp Koehn, and Ondrej Bojar. 2015. Results of the wmt15 metrics shared task. In Proceedings of the Tenth Workshop on Statistical Machine Translation. pages 256–273. Alan M Turing. 1950. Computing machinery and intelligence. Mind 59(236):433–460. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869 . Marilyn A Walker, Diane J Litman, Candace A Kamm, and Alicia Abella. 1997. Paradise: A framework for evaluating spoken dialogue agents. In Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 271–280. J. Weizenbaum. 1966. ELIZAa computer program for the study of natural language communication between man and machine. Communications of the ACM 9(1):36–45. Yang Xiang, Yaoyun Zhang, Xiaoqiang Zhou, Xiaolong Wang, and Yang Qin. 2014. Problematic situation analysis and automatic recognition for chi-nese online conversational system. Proc. CLP pages 43– 51. Zhou Yu, Ziyu Xu, Alan W Black, and Alex I Rudnicky. 2016. Strategy and policy learning for nontask-oriented conversational systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. page 404. 1126
2017
103
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1127–1138 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1104 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1127–1138 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1104 A Transition-Based Directed Acyclic Graph Parser for UCCA Daniel Hershcovich1,2 Omri Abend2 1The Edmond and Lily Safra Center for Brain Sciences 2School of Computer Science and Engineering Hebrew University of Jerusalem {danielh,oabend,arir}@cs.huji.ac.il Ari Rappoport2 Abstract We present the first parser for UCCA, a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. To our knowledge, the conjunction of these formal properties is not supported by any existing parser. Our transition-based parser, which uses a novel transition set and features based on bidirectional LSTMs, has value not just for UCCA parsing: its ability to handle more general graph structures can inform the development of parsers for other semantic DAG structures, and in languages that frequently use discontinuous structures. 1 Introduction Universal Conceptual Cognitive Annotation (UCCA, Abend and Rappoport, 2013) is a crosslinguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010a,b, 2012), and Cognitive Linguistics literature (Croft and Cruse, 2004). It has demonstrated applicability to multiple languages, including English, French, German and Czech, support for rapid annotation by non-experts (assisted by an accessible annotation interface (Abend et al., 2017)), and stability under translation (Sulem et al., 2015). It has also proven useful for machine translation evaluation (Birch et al., 2016). UCCA differs from syntactic schemes in terms of content and formal structure. It exhibits reentrancy, discontinuous nodes and non-terminals, which no single existing parser supports. Lacking a parser, UCCA’s applicability has been so far limited, a gap this work addresses. We present the first UCCA parser, TUPA (Transition-based UCCA Parser), building on recent advances in discontinuous constituency and dependency graph parsing, and further introducing novel transitions and features for UCCA. Transition-based techniques are a natural starting point for UCCA parsing, given the conceptual similarity of UCCA’s distinctions, centered around predicate-argument structures, to distinctions expressed by dependency schemes, and the achievements of transition-based methods in dependency parsing (Dyer et al., 2015; Andor et al., 2016; Kiperwasser and Goldberg, 2016). 
We are further motivated by the strength of transition-based methods in related tasks, including dependency graph parsing (Sagae and Tsujii, 2008; Ribeyre et al., 2014; Tokg¨oz and Eryi˘git, 2015), constituency parsing (Sagae and Lavie, 2005; Zhang and Clark, 2009; Zhu et al., 2013; Maier, 2015; Maier and Lichte, 2016), AMR parsing (Wang et al., 2015a,b, 2016; Misra and Artzi, 2016; Goodman et al., 2016; Zhou et al., 2016; Damonte et al., 2017) and CCG parsing (Zhang and Clark, 2011; Ambati et al., 2015, 2016). We evaluate TUPA on the English UCCA corpora, including in-domain and out-of-domain settings. To assess the ability of existing parsers to tackle the task, we develop a conversion procedure from UCCA to bilexical graphs and trees. Results show superior performance for TUPA, demonstrating the effectiveness of the presented approach.1 The rest of the paper is structured as follows: 1All parsing and conversion code, as well as trained parser models, are available at https://github.com/ danielhers/tupa. 1127 Section 2 describes UCCA in more detail. Section 3 introduces TUPA. Section 4 discusses the data and experimental setup. Section 5 presents the experimental results. Section 6 summarizes related work, and Section 7 concludes the paper. 2 The UCCA Scheme UCCA graphs are labeled, directed acyclic graphs (DAGs), whose leaves correspond to the tokens of the text. A node (or unit) corresponds to a terminal or to several terminals (not necessarily contiguous) viewed as a single entity according to semantic or cognitive considerations. Edges bear a category, indicating the role of the sub-unit in the parent relation. Figure 1 presents a few examples. UCCA is a multi-layered representation, where each layer corresponds to a “module” of semantic distinctions. UCCA’s foundational layer, targeted in this paper, covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as coordination and multi-word expressions. The layer’s basic notion is the scene, describing a state, action, movement or some other relation that evolves in time. Each scene contains one main relation (marked as either a Process or a State), as well as one or more Participants. For example, the sentence “After graduation, John moved to Paris” (Figure 1a) contains two scenes, whose main relations are “graduation” and “moved”. “John” is a Participant in both scenes, while “Paris” only in the latter. Further categories account for inter-scene relations and the internal structure of complex arguments and relations (e.g. coordination, multi-word expressions and modification). One incoming edge for each non-root node is marked as primary, and the rest (mostly used for implicit relations and arguments) as remote edges, a distinction made by the annotator. The primary edges thus form a tree structure, whereas the remote edges enable reentrancy, forming a DAG. While parsing technology in general, and transition-based parsing in particular, is wellestablished for syntactic parsing, UCCA has several distinct properties that distinguish it from syntactic representations, mostly UCCA’s tendency to abstract away from syntactic detail that do not affect argument structure. 
For instance, consider the following examples where the concept of a scene (a) After L graduation P H , U John A moved P to R Paris C A H A (b) John A gave C everything up C P A P process A participant H linked scene C center R relator N connector L scene linker U punctuation F function unit (c) John C and N Mary C ’s F A trip P home A Figure 1: UCCA structures demonstrating three structural properties exhibited by the scheme. (a) includes a remote edge (dashed), resulting in “John” having two parents. (b) includes a discontinuous unit (“gave ... up”). (c) includes a coordination construction (“John and Mary”). Pre-terminal nodes are omitted for brevity. Right: legend of edge labels. has a different rationale from the syntactic concept of a clause. First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases. Indeed, in Figure 1a, “graduation” and “moved” are considered separate events, despite appearing in the same clause. Second, in the same example, “John” is marked as a (remote) Participant in the graduation scene, despite not being overtly marked. Third, consider the possessive construction in Figure 1c. While in UCCA “trip” evokes a scene in which “John and Mary” is a Participant, a syntactic scheme would analyze this phrase similarly to “John and Mary’s shoes”. These examples demonstrate that a UCCA parser, and more generally semantic parsers, face an additional level of ambiguity compared to their syntactic counterparts (e.g., “after graduation” is formally very similar to “after 2pm”, which does not evoke a scene). Section 6 discusses UCCA in the context of other semantic schemes, such as AMR (Banarescu et al., 2013). Alongside recent progress in dependency parsing into projective trees, there is increasing interest in parsing into representations with more general structural properties (see Section 6). One such property is reentrancy, namely the sharing of semantic units between predicates. For instance, in Figure 1a, “John” is an argument of both “gradu1128 ation” and “moved”, yielding a DAG rather than a tree. A second property is discontinuity, as in Figure 1b, where “gave up” forms a discontinuous semantic unit. Discontinuities are pervasive, e.g., with multi-word expressions (Schneider et al., 2014). Finally, unlike most dependency schemes, UCCA uses non-terminal nodes to represent units comprising more than one word. The use of non-terminal nodes is motivated by constructions with no clear head, including coordination structures (e.g., “John and Mary” in Figure 1c), some multi-word expressions (e.g., “The Haves and the Have Nots”), and prepositional phrases (either the preposition or the head noun can serve as the constituent’s head). To our knowledge, no existing parser supports all structural properties required for UCCA parsing. 3 Transition-based UCCA Parsing We now turn to presenting TUPA. Building on previous work on parsing reentrancies, discontinuities and non-terminal nodes, we define an extended set of transitions and features that supports the conjunction of these properties. 
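Before turning to the transitions, the kind of structure the parser must produce (token leaves, non-terminal units, labeled primary edges forming a tree, plus remote edges adding reentrancy) can be sketched as a small data structure; this is a hypothetical illustration, not the official UCCA library API:

from dataclasses import dataclass, field

@dataclass
class Edge:
    parent: int
    child: int
    label: str               # UCCA category, e.g. "P", "A", "H", "C"
    remote: bool = False     # remote edges add the reentrancies of Figure 1a

@dataclass
class Passage:
    tokens: list                               # terminals, indexed by position
    units: set = field(default_factory=set)    # ids of non-terminal units
    edges: list = field(default_factory=list)

    def parents(self, node_id):
        return [e.parent for e in self.edges if e.child == node_id]

    def is_reentrant(self, node_id):
        # More than one parent, e.g. "John" in Figure 1a.
        return len(self.parents(node_id)) > 1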
Transition-based parsers (Nivre, 2003) scan the text from start to end, and create the parse incrementally by applying a transition at each step to the parser’s state, defined using three data structures: a buffer B of tokens and nodes to be processed, a stack S of nodes currently being processed, and a graph G = (V, E, ℓ) of constructed nodes and edges, where V is the set of nodes, E is the set of edges, and ℓ: E →L is the label function, L being the set of possible labels. Some states are marked as terminal, meaning that G is the final output. A classifier is used at each step to select the next transition based on features encoding the parser’s current state. During training, an oracle creates training instances for the classifier, based on gold-standard annotations. Transition Set. Given a sequence of tokens w1, . . . , wn, we predict a UCCA graph G over the sequence. Parsing starts with a single node on the stack (an artificial root node), and the input tokens in the buffer. Figure 2 shows the transition set. In addition to the standard SHIFT and REDUCE operations, we follow previous work in transition-based constituency parsing (Sagae and Lavie, 2005), adding the NODE transition for creating new non-terminal nodes. For every X ∈L, NODEX creates a new node on the buffer as a parent of the first element on the stack, with an Xlabeled edge. LEFT-EDGEX and RIGHT-EDGEX create a new primary X-labeled edge between the first two elements on the stack, where the parent is the left or the right node, respectively. As a UCCA node may only have one incoming primary edge, EDGE transitions are disallowed if the child node already has an incoming primary edge. LEFTREMOTEX and RIGHT-REMOTEX do not have this restriction, and the created edge is additionally marked as remote. We distinguish between these two pairs of transitions to allow the parser to create remote edges without the possibility of producing invalid graphs. To support the prediction of multiple parents, node and edge transitions leave the stack unchanged, as in other work on transition-based dependency graph parsing (Sagae and Tsujii, 2008; Ribeyre et al., 2014; Tokg¨oz and Eryi˘git, 2015). REDUCE pops the stack, to allow removing a node once all its edges have been created. To handle discontinuous nodes, SWAP pops the second node on the stack and adds it to the top of the buffer, as with the similarly named transition in previous work (Nivre, 2009; Maier, 2015). Finally, FINISH pops the root node and marks the state as terminal. Classifier. The choice of classifier and feature representation has been shown to play an important role in transition-based parsing (Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016). To investigate the impact of the type of transition classifier in UCCA parsing, we experiment with three different models. 1. Starting with a simple and common choice (e.g., Maier and Lichte, 2016), TUPASparse uses a linear classifier with sparse features, trained with the averaged structured perceptron algorithm (Collins and Roark, 2004) and MINUPDATE (Goldberg and Elhadad, 2011): each feature requires a minimum number of updates in training to be included in the model.2 2. 
Changing the model to a feedforward neural network with dense embedding features, TUPAMLP (“multi-layer perceptron”), uses an architecture similar to that of Chen and Manning (2014), but with two rectified linear layers 2We also experimented with a linear model using dense embedding features, trained with the averaged structured perceptron algorithm. It performed worse than the sparse perceptron model and was hence discarded. 1129 Before Transition Transition After Transition Condition Stack Buffer Nodes Edges Stack Buffer Nodes Edges Terminal? S x | B V E SHIFT S | x B V E − S | x B V E REDUCE S B V E − S | x B V E NODEX S | x y | B V ∪{y} E ∪{(y, x)X} − x ̸= root S | y, x B V E LEFT-EDGEX S | y, x B V E ∪{(x, y)X} −    x ̸∈w1:n, y ̸= root, y ̸ ;G x S | x, y B V E RIGHT-EDGEX S | x, y B V E ∪{(x, y)X} − S | y, x B V E LEFT-REMOTEX S | y, x B V E ∪{(x, y)∗ X} − S | x, y B V E RIGHT-REMOTEX S | x, y B V E ∪{(x, y)∗ X} − S | x, y B V E SWAP S | y x | B V E − i(x) < i(y) [root] ∅ V E FINISH ∅ ∅ V E + Figure 2: The transition set of TUPA. We write the stack with its top to the right and the buffer with its head to the left. (·, ·)X denotes a primary X-labeled edge, and (·, ·)∗ X a remote X-labeled edge. i(x) is a running index for the created nodes. In addition to the specified conditions, the prospective child in an EDGE transition must not already have a primary parent. instead of one layer with cube activation. The embeddings and classifier are trained jointly. 3. Finally, TUPABiLSTM uses a bidirectional LSTM for feature representation, on top of the dense embedding features, an architecture similar to Kiperwasser and Goldberg (2016). The BiLSTM runs on the input tokens in forward and backward directions, yielding a vector representation that is then concatenated with dense features representing the parser state (e.g., existing edge labels and previous parser actions; see below). This representation is then fed into a feedforward network similar to TUPAMLP. The feedforward layers, BiLSTM and embeddings are all trained jointly. For all classifiers, inference is performed greedily, i.e., without beam search. Hyperparameters are tuned on the development set (see Section 4). Features. TUPASparse uses binary indicator features representing the words, POS tags, syntactic dependency labels and existing edge labels related to the top four stack elements and the next three buffer elements, in addition to their children and grandchildren in the graph. We also use bi- and trigram features based on these values (Zhang and Clark, 2009; Zhu et al., 2013), features related to discontinuous nodes (Maier, 2015, including separating punctuation and gap type), features representing existing edges and the number of parents and children, as well as the past actions taken by the parser. In addition, we use use a novel, UCCAspecific feature: number of remote children.3 For TUPAMLP and TUPABiLSTM, we replace all indicator features by a concatenation of the vector embeddings of all represented elements: words, 3See Appendix A for a full list of used feature templates. POS tags, syntactic dependency labels, edge labels, punctuation, gap type and parser actions. These embeddings are initialized randomly. We additionally use external word embeddings initialized with pre-trained word2vec vectors (Mikolov et al., 2013),4 updated during training. 
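As a concrete illustration of this dense representation, the sketch below assembles the classifier input by concatenating embedding lookups for the represented elements; the table names, dimensions and padding symbol are our own simplifications, the learned tables are lazily created random vectors, and the external table stands in for the pre-trained word2vec vectors that would be loaded in practice.

import numpy as np

DIMS = {"word": 100, "pos": 20, "dep": 20, "edge": 20, "action": 20, "external": 100}
LEARNED = {name: {} for name in ("word", "pos", "dep", "edge", "action")}
EXTERNAL = {}   # stand-in for the pre-trained word2vec table, fine-tuned in training
rng = np.random.default_rng(0)

def lookup(table, key, dim):
    # Embeddings are created lazily and initialized randomly in this sketch,
    # so unseen keys never fail.
    if key not in table:
        table[key] = rng.standard_normal(dim) * 0.01
    return table[key]

def dense_features(elements, recent_actions):
    # `elements` lists the represented stack/buffer nodes and their children,
    # e.g. {"word": "graduation", "pos": "NOUN", "dep": "pobj", "edge": "P"};
    # missing attributes fall back to a padding symbol.
    parts = []
    for el in elements:
        for feat in ("word", "pos", "dep", "edge"):
            parts.append(lookup(LEARNED[feat], el.get(feat, "<PAD>"), DIMS[feat]))
        parts.append(lookup(EXTERNAL, el.get("word", "<PAD>"), DIMS["external"]))
    for action in recent_actions:
        parts.append(lookup(LEARNED["action"], action, DIMS["action"]))
    return np.concatenate(parts)   # input vector for the feedforward (or BiLSTM) classifier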
In addition to dropout between NN layers, we apply word dropout (Kiperwasser and Goldberg, 2016): with a certain probability, the embedding for a word is replaced with a zero vector. We do not apply word dropout to the external word embeddings. Finally, for all classifiers we add a novel realvalued feature to the input vector, ratio, corresponding to the ratio between the number of terminals to number of nodes in the graph G. This feature serves as a regularizer for the creation of new nodes, and should be beneficial for other transition-based constituency parsers too. Training. For training the transition classifiers, we use a dynamic oracle (Goldberg and Nivre, 2012), i.e., an oracle that outputs a set of optimal transitions: when applied to the current parser state, the gold standard graph is reachable from the resulting state. For example, the oracle would predict a NODE transition if the stack has on its top a parent in the gold graph that has not been created, but would predict a RIGHT-EDGE transition if the second stack element is a parent of the first element according to the gold graph and the edge between them has not been created. The transition predicted by the classifier is deemed correct and is applied to the parser state to reach the subsequent state, if the transition is included in the set of optimal transitions. Otherwise, a random optimal transition is applied, and for the perceptronbased parser, the classifier’s weights are updated 4https://goo.gl/6ovEhC 1130 Parser state S , B John moved to Paris . G After L graduation P H Transition classifier After LSTM LSTM LSTM LSTM graduation LSTM LSTM LSTM LSTM to LSTM LSTM LSTM LSTM Paris LSTM LSTM LSTM LSTM ... ... ... ... ... MLP NODEU Figure 3: Illustration of the TUPA model. Top: parser state (stack, buffer and intermediate graph). Bottom: TUPABiLTSM architecture. Vector representation for the input tokens is computed by two layers of bidirectional LSTMs. The vectors for specific tokens are concatenated with embedding and numeric features from the parser state (for existing edge labels, number of children, etc.), and fed into the MLP for selecting the next transition. according to the perceptron update rule. POS tags and syntactic dependency labels are extracted using spaCy (Honnibal and Johnson, 2015).5 We use the categorical cross-entropy objective function and optimize the NN classifiers with the Adam optimizer (Kingma and Ba, 2014). 4 Experimental Setup Data. We conduct our experiments on the UCCA Wikipedia corpus (henceforth, Wiki), and use the English part of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (henceforth, 20K Leagues) as outof-domain data.6 Table 1 presents some statistics for the two corpora. We use passages of indices up to 676 of the Wiki corpus as our training set, passages 688–808 as development set, and passages 942–1028 as in-domain test set. While 5https://spacy.io 6http://cs.huji.ac.il/˜oabend/ucca.html Wiki 20K Train Dev Test Leagues # passages 300 34 33 154 # sentences 4268 454 503 506 # nodes 298,993 33,704 35,718 29,315 % terminal 42.96 43.54 42.87 42.09 % non-term. 58.33 57.60 58.35 60.01 % discont. 0.54 0.53 0.44 0.81 % reentrant 2.38 1.88 2.15 2.03 # edges 287,914 32,460 34,336 27,749 % primary 98.25 98.75 98.74 97.73 % remote 1.75 1.25 1.26 2.27 Average per non-terminal node # children 1.67 1.68 1.66 1.61 Table 1: Statistics of the Wiki and 20K Leagues UCCA corpora. All counts exclude the root node, implicit nodes, and linkage nodes and edges. 
UCCA edges can cross sentence boundaries, we adhere to the common practice in semantic parsing and train our parsers on individual sentences, discarding inter-relations between them (0.18% of the edges). We also discard linkage nodes and edges (as they often express inter-sentence relations and are thus mostly redundant when applied at the sentence level) as well as implicit nodes.7 In the out-of-domain experiments, we apply the same parsers (trained on the Wiki training set) to the 20K Leagues corpus without parameter re-tuning. Implementation. We use the DyNet package (Neubig et al., 2017) for implementing the NN classifiers. Unless otherwise noted, we use the default values provided by the package. See Appendix C for the hyperparameter values we found by tuning on the development set. Evaluation. We define a simple measure for comparing UCCA structures Gp = (Vp, Ep, ℓp) and Gg = (Vg, Eg, ℓg), the predicted and goldstandard graphs, respectively, over the same sequence of terminals W = {w1, . . . , wn}. For an edge e = (u, v) in either graph, u being the parent and v the child, its yield y(e) ⊆W is the set of terminals in W that are descendants of v. Define the set of mutual edges between Gp and Gg: M(Gp, Gg) = {(e1, e2) ∈Ep × Eg | y(e1) = y(e2) ∧ℓp(e1) = ℓg(e2)} Labeled precision and recall are defined by dividing |M(Gp, Gg)| by |Ep| and |Eg|, respectively, and F-score by taking their harmonic mean. 7Appendix B further discusses linkage and implicit units. 1131 After graduation , John moved to Paris L U A A H R A John gave everything up A A C John and Mary went home A N C A Figure 4: Bilexical graph approximation (dependency graph) for the sentences in Figure 1. We report two variants of this measure: one where we consider only primary edges, and another for remote edges (see Section 2). Performance on remote edges is of pivotal importance in this investigation, which focuses on extending the class of graphs supported by statistical parsers. We note that the measure collapses to the standard PARSEVAL constituency evaluation measure if Gp and Gg are trees. Punctuation is excluded from the evaluation, but not from the datasets. Comparison to bilexical graph parsers. As no direct comparison with existing parsers is possible, we compare TUPA to bilexical dependency graph parsers, which support reentrancy and discontinuity but not non-terminal nodes. To facilitate the comparison, we convert our training set into bilexical graphs (see examples in Figure 4), train each of the parsers, and evaluate them by applying them to the test set and then reconstructing UCCA graphs, which are compared with the gold standard. The conversion to bilexical graphs is done by heuristically selecting a head terminal for each non-terminal node, and attaching all terminal descendents to the head terminal. In the inverse conversion, we traverse the bilexical graph in topological order, creating non-terminal parents for all terminals, and attaching them to the previously-created non-terminals corresponding to the bilexical heads.8 In Section 5 we report the upper bounds on the achievable scores due to the error resulting from the removal of non-terminal nodes. Comparison to tree parsers. For completeness, and as parsing technology is considerably more 8See Appendix D for a detailed description of the conversion procedures. 
After L graduation P H , U John A moved P to R Paris C A H After graduation , John moved to Paris L U A H R A Figure 5: Tree approximation (constituency) for the sentence in Figure 1a (top), and bilexical tree approximation (dependency) for the same sentence (bottom). These are identical to the original graphs, apart from the removal of remote edges. mature for tree (rather than graph) parsing, we also perform a tree approximation experiment, converting UCCA to (bilexical) trees and evaluating constituency and dependency tree parsers on them (see examples in Figure 5). Our approach is similar to the tree approximation approach used for dependency graph parsing (Agi´c et al., 2015; Fern´andez-Gonz´alez and Martins, 2015), where dependency graphs were converted into dependency trees and then parsed by dependency tree parsers. In our setting, the conversion to trees consists simply of removing remote edges from the graph, and then to bilexical trees by applying the same procedure as for bilexical graphs. Baseline parsers. We evaluate two bilexical graph semantic dependency parsers: DAGParser (Ribeyre et al., 2014), the leading transition-based parser in SemEval 2014 (Oepen et al., 2014) and TurboParser (Almeida and Martins, 2015), a graph-based parser from SemEval 2015 (Oepen et al., 2015); UPARSE (Maier and Lichte, 2016), a transition-based constituency parser supporting discontinuous constituents; and two bilexical tree parsers: MaltParser (Nivre et al., 2007), and the stack LSTM-based parser of Dyer et al. (2015, henceforce “LSTM Parser”). Default settings are used in all cases.9 DAGParser and UPARSE use beam search by default, with a beam size of 5 and 4 respectively. The other parsers are greedy. 5 Results Table 2 presents our main experimental results, as well as upper bounds for the baseline parsers, re9For MaltParser we use the ARCEAGER transition set and SVM classifier. Other configurations yielded lower scores. 1132 Wiki (in-domain) 20K Leagues (out-of-domain) Primary Remote Primary Remote LP LR LF LP LR LF LP LR LF LP LR LF TUPASparse 64.5 63.7 64.1 19.8 13.4 16 59.6 59.9 59.8 22.2 7.7 11.5 TUPAMLP 65.2 64.6 64.9 23.7 13.2 16.9 62.3 62.6 62.5 20.9 6.3 9.7 TUPABiLSTM 74.4 72.7 73.5 47.4 51.6 49.4 68.7 68.5 68.6 38.6 18.8 25.3 Bilexical Approximation (Dependency DAG Parsers) Upper Bound 91 58.3 91.3 43.4 DAGParser 61.8 55.8 58.6 9.5 0.5 1 56.4 50.6 53.4 – 0 0 TurboParser 57.7 46 51.2 77.8 1.8 3.7 50.3 37.7 43.1 100 0.4 0.8 Tree Approximation (Constituency Tree Parser) Upper Bound 100 – 100 – UPARSE 60.9 61.2 61.1 – – – 52.7 52.8 52.8 – – – Bilexical Tree Approximation (Dependency Tree Parsers) Upper Bound 91 – 91.3 – MaltParser 62.8 57.7 60.2 – – – 57.8 53 55.3 – – – LSTM Parser 73.2 66.9 69.9 – – – 66.1 61.1 63.5 – – – Table 2: Experimental results, in percents, on the Wiki test set (left) and the 20K Leagues set (right). Columns correspond to labeled precision, recall and F-score, for both primary and remote edges. F-score upper bounds are reported for the conversions. For the tree approximation experiments, only primary edges scores are reported, as they are unable to predict remote edges. TUPABiLSTM obtains the highest F-scores in all metrics, surpassing the bilexical parsers, tree parsers and other classifiers. flecting the error resulting from the conversion.10 DAGParser and UPARSE are most directly comparable to TUPASparse, as they also use a perceptron classifier with sparse features. 
TUPASparse considerably outperforms both, where DAGParser does not predict any remote edges in the out-ofdomain setting. TurboParser fares worse in this comparison, despite somewhat better results on remote edges. The LSTM parser of Dyer et al. (2015) obtains the highest primary F-score among the baseline parsers, with a considerable margin. Using a feedforward NN and embedding features, TUPAMLP obtains higher scores than TUPASparse, but is outperformed by the LSTM parser on primary edges. However, using better input encoding allowing virtual look-ahead and look-behind in the token representation, TUPABiLSTM obtains substantially higher scores than TUPAMLP and all other parsers, on both primary and remote edges, both in the in-domain and out-of-domain settings. Its performance in absolute terms, of 73.5% F-score on primary edges, is encouraging in light of UCCA’s inter-annotator agreement of 80–85% F-score on them (Abend and Rappoport, 2013). The parsers resulting from tree approximation 10The low upper bound for remote edges is partly due to the removal of implicit nodes (not supported in bilexical representations), where the whole sub-graph headed by such nodes, often containing remote edges, must be discarded. are unable to recover any remote edges, as these are removed in the conversion.11 The bilexical DAG parsers are quite limited in this respect as well. While some of the DAG parsers’ difficulty can be attributed to the conversion upper bound of 58.3%, this in itself cannot account for their poor performance on remote edges, which is an order of magnitude lower than that of TUPABiLSTM. 6 Related Work While earlier work on anchored12 semantic parsing has mostly concentrated on shallow semantic analysis, focusing on semantic role labeling of verbal argument structures, the focus has recently shifted to parsing of more elaborate representations that account for a wider range of phenomena (Abend and Rappoport, 2017). Grammar-Based Parsing. Linguistically expressive grammars such as HPSG (Pollard and Sag, 1994), CCG (Steedman, 2000) and TAG (Joshi and Schabes, 1997) provide a theory of the syntax-semantics interface, and have been used as a basis for semantic parsers by defining com11We also experimented with a simpler version of TUPA lacking REMOTE transitions, obtaining an increase of up to 2 labeled F-score points on primary edges, at the cost of not being able to predict remote edges. 12By anchored we mean that the semantic representation directly corresponds to the words and phrases of the text. 1133 positional semantics on top of them (Flickinger, 2000; Bos, 2005, among others). Depending on the grammar and the implementation, such semantic parsers can support some or all of the structural properties UCCA exhibits. Nevertheless, this line of work differs from our approach in two important ways. First, the representations are different. UCCA does not attempt to model the syntaxsemantics interface and is thus less coupled with syntax. Second, while grammar-based parsers explicitly model syntax, our approach directly models the relation between tokens and semantic structures, without explicit composition rules. Broad-Coverage Semantic Parsing. Most closely related to this work is Broad-Coverage Semantic Dependency Parsing (SDP), addressed in two SemEval tasks (Oepen et al., 2014, 2015). Like UCCA parsing, SDP addresses a wide range of semantic phenomena, and supports discontinuous units and reentrancy. 
In SDP, however, bilexical dependencies are used, and a head must be selected for every relation—even in constructions that have no clear head, such as coordination (Ivanova et al., 2012). The use of non-terminal nodes is a simple way to avoid this liability. SDP also differs from UCCA in the type of distinctions it makes, which are more tightly coupled with syntactic considerations, where UCCA aims to capture purely semantic cross-linguistically applicable notions. For instance, the “poss” label in the DM target representation is used to annotate syntactic possessive constructions, regardless of whether they correspond to semantic ownership (e.g., “John’s dog”) or other semantic relations, such as marking an argument of a nominal predicate (e.g., “John’s kick”). UCCA reflects the difference between these constructions. Recent interest in SDP has yielded numerous works on graph parsing (Ribeyre et al., 2014; Thomson et al., 2014; Almeida and Martins, 2015; Du et al., 2015), including tree approximation (Agi´c and Koller, 2014; Schluter et al., 2014) and joint syntactic/semantic parsing (Henderson et al., 2013; Swayamdipta et al., 2016). Abstract Meaning Representation. Another line of work addresses parsing into AMRs (Flanigan et al., 2014; Vanderwende et al., 2015; Pust et al., 2015; Artzi et al., 2015), which, like UCCA, abstract away from syntactic distinctions and represent meaning directly, using OntoNotes predicates (Weischedel et al., 2013). Events in AMR may also be evoked by non-verbal predicates, including possessive constructions. Unlike in UCCA, the alignment between AMR concepts and the text is not explicitly marked. While sharing much of this work’s motivation, not anchoring the representation in the text complicates the parsing task, as it requires the alignment to be automatically (and imprecisely) detected. Indeed, despite considerable technical effort (Flanigan et al., 2014; Pourdamghani et al., 2014; Werling et al., 2015), concept identification is only about 80%–90% accurate. Furthermore, anchoring allows breaking down sentences into semantically meaningful sub-spans, which is useful for many applications (Fern´andez-Gonz´alez and Martins, 2015; Birch et al., 2016). Several transition-based AMR parsers have been proposed: CAMR assumes syntactically parsed input, processing dependency trees into AMR (Wang et al., 2015a,b, 2016; Goodman et al., 2016). In contrast, the parsers of Damonte et al. (2017) and Zhou et al. (2016) do not require syntactic pre-processing. Damonte et al. (2017) perform concept identification using a simple heuristic selecting the most frequent graph for each token, and Zhou et al. (2016) perform concept identification and parsing jointly. UCCA parsing does not require separately aligning the input tokens to the graph. TUPA creates non-terminal units as part of the parsing process. Furthermore, existing transition-based AMR parsers are not general DAG parsers. They are only able to predict a subset of reentrancies and discontinuities, as they may remove nodes before their parents have been predicted (Damonte et al., 2017). They are thus limited to a sub-class of AMRs in particular, and specifically cannot produce arbitrary DAG parses. TUPA’s transition set, on the other hand, allows general DAG parsing.13 7 Conclusion We present TUPA, the first parser for UCCA. 
Evaluated in in-domain and out-of-domain settings, we show that coupled with a NN classifier and BiLSTM feature extractor, it accurately predicts UCCA graphs from text, outperforming a variety of strong baselines by a margin. Despite the recent diversity of semantic pars13See Appendix E for a proof sketch for the completeness of TUPA’s transition set. 1134 ing work, the effectiveness of different approaches for structurally and semantically different schemes is not well-understood (Kuhlmann and Oepen, 2016). Our contribution to this literature is a general parser that supports multiple parents, discontinuous units and non-terminal nodes. Future work will evaluate TUPA in a multilingual setting, assessing UCCA’s cross-linguistic applicability. We will also apply the TUPA transition scheme to different target representations, including AMR and SDP, exploring the limits of its generality. In addition, we will explore different conversion procedures (Kong et al., 2015) to compare different representations, suggesting ways for a data-driven design of semantic annotation. A parser for UCCA will enable using the framework for new tasks, in addition to existing applications such as machine translation evaluation (Birch et al., 2016). We believe UCCA’s merits in providing a cross-linguistically applicable, broadcoverage annotation will support ongoing efforts to incorporate deeper semantic structures into various applications, such as sentence simplification (Narayan and Gardent, 2014) and summarization (Liu et al., 2015). Acknowledgments This work was supported by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office, and by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). The first author was supported by a fellowship from the Edmond and Lily Safra Center for Brain Sciences. We thank Wolfgang Maier, Nathan Schneider, Elior Sulem and the anonymous reviewers for their helpful comments. References Omri Abend and Ari Rappoport. 2013. Universal Conceptual Cognitive Annotation (UCCA). In Proc. of ACL. pages 228–238. http://aclweb.org/anthology/P13-1023. Omri Abend and Ari Rappoport. 2017. The state of the art in semantic representation. In Proc. of ACL. To appear. Omri Abend, Shai Yerushalmi, and Ari Rappoport. 2017. UCCAApp: Web-application for syntactic and semantic phrase-based annotation. In Proc. of ACL: System Demonstration Papers. To appear. ˇZeljko Agi´c and Alexander Koller. 2014. Potsdam: Semantic dependency parsing by bidirectional graph-tree transformations and syntactic parsing. In Proc. of SemEval. pages 465–470. http://aclweb.org/anthology/S14-2081. ˇZeljko Agi´c, Alexander Koller, and Stephan Oepen. 2015. Semantic dependency graph parsing using tree approximations. In Proc. of IWCS. pages 217– 227. http://aclweb.org/anthology/W15-0126. Mariana S. C. Almeida and Andr´e F. T. Martins. 2015. Lisbon: Evaluating TurboSemanticParser on multiple languages and out-of-domain data. In Proc. of SemEval. pages 970–973. http://aclweb.org/anthology/S15-2162. Bharat Ram Ambati, Tejaswini Deoskar, Mark Johnson, and Mark Steedman. 2015. An incremental algorithm for transition-based CCG parsing. In Proc. of NAACL. pages 53–63. http://aclweb.org/anthology/N15-1006. Bharat Ram Ambati, Tejaswini Deoskar, and Mark Steedman. 2016. Shift-reduce CCG parsing using neural network models. In Proc. of NAACL-HLT. pages 447–453. http://aclweb.org/anthology/N161052. 
Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proc. of ACL. pages 2442–2452. http://aclweb.org/anthology/P16-1231. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proc. of EMNLP. pages 1699–1710. http://aclweb.org/anthology/D15-1198. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proc. of the Linguistic Annotation Workshop. http://aclweb.org/anthology/W13-2322. Alexandra Birch, Omri Abend, Ondˇrej Bojar, and Barry Haddow. 2016. HUME: Human UCCA-based evaluation of machine translation. In Proc. of EMNLP. pages 1264–1274. http://aclweb.org/anthology/D16-1134. Johan Bos. 2005. Towards wide-coverage semantic interpretation. In Proc. of IWCS. volume 6, pages 42–53. http://www.let.rug.nl/bos/pubs/Bos2005IWCS.pdf. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proc. of EMNLP. pages 740–750. http://aclweb.org/anthology/D14-1082. 1135 Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. of ACL. pages 111–118. http://aclweb.org/anthology/P04-1015. William Croft and D Alan Cruse. 2004. Cognitive linguistics. Cambridge University Press. Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of EACL. http://homepages.inf.ed.ac.uk/scohen/eacl17amr.pdf. Robert M. W. Dixon. 2010a. Basic Linguistic Theory: Grammatical Topics, volume 2. Oxford University Press. Robert M. W. Dixon. 2010b. Basic Linguistic Theory: Methodology, volume 1. Oxford University Press. Robert M. W. Dixon. 2012. Basic Linguistic Theory: Further Grammatical Topics, volume 3. Oxford University Press. Yantao Du, Fan Zhang, Xun Zhang, Weiwei Sun, and Xiaojun Wan. 2015. Peking: Building semantic dependency graphs with a hybrid parser. In Proc. of SemEval. pages 927–931. http://aclweb.org/anthology/S15-2154. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependeny parsing with stack long shortterm memory. In Proc. of ACL. pages 334–343. http://aclweb.org/anthology/P15-1033. Daniel Fern´andez-Gonz´alez and Andr´e FT Martins. 2015. Parsing as reduction. In Proc. of ACL. pages 1523–1533. http://aclweb.org/anthology/P15-1147. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proc. of ACL. pages 1426–1436. http://aclweb.org/anthology/P14-1134. Daniel Flickinger. 2000. On building a more efficient grammar by exploiting types. In Collaborative Language Engineering, CLSI, Stanford, CA, volume 6, pages 15–28. Yoav Goldberg and Michael Elhadad. 2011. Learning sparser perceptron models. Technical report. http://www.cs.bgu.ac.il/˜yoavg/publications. Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proc. of COLING. pages 959–976. http://aclweb.org/anthology/C12-1059. James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016. Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing. In Proc. of ACL. pages 1– 11. http://aclweb.org/anthology/P16-1001. 
James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics 39(4):949–998. http://cognet.mit.edu/node/27348. Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proc. of EMNLP. pages 1373– 1378. http://aclweb.org/anthology/D15-1162. Angelina Ivanova, Stephan Oepen, Lilja Øvrelid, and Dan Flickinger. 2012. Who did what to whom? A contrastive study of syntacto-semantic dependencies. In Proc. of LAW. pages 2–11. http://aclweb.org/anthology/W12-3602. Aravind Joshi and Yves Schabes. 1997. TreeAdjoining Grammars. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, Springer, Berlin, volume 3, pages 69–124. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. TACL 4:313–327. https://transacl.org/ojs/index.php/tacl/article/view/885. Lingpeng Kong, Alexander M. Rush, and Noah A. Smith. 2015. Transforming dependencies into phrase structures. In Proc. of NAACL HLT. https://aclweb.org/anthology/N15-1080. Marco Kuhlmann and Stephan Oepen. 2016. Towards a catalogue of linguistic graph banks. Computational Linguistics https://mn.uio.no/ifi/english/people/aca/oe/cl.pdf. Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. 2015. Toward abstractive summarization using semantic representations. In Proc. of NAACL. pages 1077–1086. http://aclweb.org/anthology/N15-1114. Wolfgang Maier. 2015. Discontinuous incremental shift-reduce parsing. In Proc. of ACL. pages 1202– 1212. http://aclweb.org/anthology/P15-1116. Wolfgang Maier and Timm Lichte. 2016. Discontinuous parsing with continuous trees. In Proc. of Workshop on Discontinuous Structures in NLP. pages 47– 57. http://aclweb.org/anthology/W16-0906. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. https://arxiv.org/pdf/1301.3781. Dipendra K Misra and Yoav Artzi. 2016. Neural shift-reduce CCG semantic parsing. In Proc. of EMNLP. pages 1775–1786. http://aclweb.org/anthology/D16-1183. 1136 Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In Proc. of ACL. pages 435–445. http://aclweb.org/anthology/P14-1041. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 https://arxiv.org/abs/1701.03980. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proc. of IWPT. pages 149–160. http://aclweb.org/anthology/W06-2933. Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proc. of ACL. pages 351–359. http://aclweb.org/anthology/P09-1040. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. 
MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering 13(02):95–135. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, and Zdeˇnka Ureˇsov´a. 2015. SemEval 2015 task 18: Broad-coverage semantic dependency parsing. In Proc. of SemEval. pages 915–926. http://aclweb.org/anthology/S15-2153. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajiˇc, Angelina Ivanova, and Yi Zhang. 2014. SemEval 2014 task 8: Broad-coverage semantic dependency parsing. In Proc. of SemEval. pages 63–72. http://aclweb.org/anthology/S14-2008. Carl Pollard and Ivan Sag. 1994. Head Driven Phrase Structure Grammar. CSLI Publications, Stanford, CA. Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning English strings with abstract meaning representation graphs. In Proc. of EMNLP. pages 425–429. http://aclweb.org/anthology/D14-1048. Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing English into abstract meaning representation using syntax-based machine translation. In Proc. of EMNLP. pages 1143–1154. http://aclweb.org/anthology/D15-1136. Corentin Ribeyre, Eric Villemonte de la Clergerie, and Djam´e Seddah. 2014. Alpage: Transitionbased semantic graph parsing with syntactic features. In Proc. of SemEval. pages 97–103. http://aclweb.org/anthology/S14-2012. Kenji Sagae and Alon Lavie. 2005. A classifierbased parser with linear run-time complexity. In Proc. of IWPT. pages 125–132. http://aclweb.org/anthology/W05-1513. Kenji Sagae and Jun’ichi Tsujii. 2008. Shift-reduce dependency DAG parsing. In Proc. of COLING. pages 753–760. http://aclweb.org/anthology/C08-1095. Natalie Schluter, Anders Søgaard, Jakob Elming, Dirk Hovy, Barbara Plank, H´ector Mart´ınez Alonso, Anders Johanssen, and Sigrid Klerke. 2014. Copenhagen-Malm¨o: Tree approximations of semantic parsing problems. In Proc. of SemEval. pages 213–217. http://aclweb.org/anthology/S142034. Nathan Schneider, Emily Danchik, Chris Dyer, and Noah A Smith. 2014. Discriminative lexical semantic segmentation with gaps: running the MWE gamut. TACL 2:193–206. http://aclweb.org/anthology/Q14-1016.pdf. Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA. Elior Sulem, Omri Abend, and Ari Rappoport. 2015. Conceptual annotations preserve structure across translations: A French-English case study. In Proc. of S2MT. pages 11–22. http://aclweb.org/anthology/W15-3502. Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Greedy, joint syntactic-semantic parsing with stack LSTMs. In Proc. of CoNLL. pages 187–197. http://aclweb.org/anthology/K16-1019. Sam Thomson, Brendan O’Connor, Jeffrey Flanigan, David Bamman, Jesse Dodge, Swabha Swayamdipta, Nathan Schneider, Chris Dyer, and Noah A. Smith. 2014. CMU: Arcfactored, discriminative semantic dependency parsing. In Proc. of SemEval. pages 176–180. http://aclweb.org/anthology/S14-2027. Alper Tokg¨oz and G¨ulsen Eryi˘git. 2015. Transitionbased dependency DAG parsing using dynamic oracles. In Proc. of ACL Student Research Workshop. pages 22–27. http://aclweb.org/anthology/P153004. Lucy Vanderwende, Arul Menezes, and Chris Quirk. 2015. An AMR parser for English, French, German, Spanish and Japanese and a new AMRannotated corpus. In Proc. of NAACL. pages 26–30. http://aclweb.org/anthology/N15-3006. 1137 Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. 
CAMR at SemEval2016 task 8: An extended transition-based amr parser. In Proc. of SemEval. pages 1173–1178. http://aclweb.org/anthology/S16-1181. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based AMR parsing with refined actions and auxiliary analyzers. In Proc. of ACL. pages 857–862. http://aclweb.org/anthology/P15-2141. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for AMR parsing. In Proc. of NAACL. pages 366–375. http://aclweb.org/anthology/N15-1040. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. OntoNotes release 5.0 LDC2013T19. Linguistic Data Consortium, Philadelphia, PA https://catalog.ldc.upenn.edu/LDC2013T19. Keenon Werling, Gabor Angeli, and Christopher D. Manning. 2015. Robust subgraph generation improves abstract meaning representation parsing. In Proc. of ACL. pages 982–991. http://aclweb.org/anthology/P15-1095. Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese treebank using a global discriminative model. In Proc. of IWPT. Association for Computational Linguistics, pages 162–171. http://aclweb.org/anthology/W09-3825. Yue Zhang and Stephen Clark. 2011. Shift-reduce CCG parsing. In Proc. of ACL. pages 683–692. http://aclweb.org/anthology/P11-1069. Junsheng Zhou, Feiyu Xu, Hans Uszkoreit, Weiguang Qu, Ran Li, and Yanhui Gu. 2016. AMR parsing with an incremental joint model. In Proc. of EMNLP. pages 680–689. http://aclweb.org/anthology/D16-1065. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proc. of ACL. pages 434–443. http://aclweb.org/anthology/P13-1043. 1138
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1139–1149, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1105

Abstract Syntax Networks for Code Generation and Semantic Parsing
Maxim Rabinovich∗ Mitchell Stern∗ Dan Klein
Computer Science Division, University of California, Berkeley
{rabinovich,mitchell,klein}@cs.berkeley.edu
∗Equal contribution.

Abstract
Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with a dynamically-determined modular structure paralleling the structure of the output tree. On the benchmark HEARTHSTONE dataset for code generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy, compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we perform competitively on the ATIS, JOBS, and GEO semantic parsing datasets with no task-specific engineering.

1 Introduction
Tasks like semantic parsing and code generation are challenging in part because they are structured (the output must be well-formed) but not synchronous (the output structure diverges from the input structure). Sequence-to-sequence models have proven effective for both tasks (Dong and Lapata, 2016; Ling et al., 2016), using encoder-decoder frameworks to exploit the sequential structure on both the input and output side. Yet these approaches do not account for much richer structural constraints on outputs, including well-formedness, well-typedness, and executability. The well-formedness case is of particular interest, since it can readily be enforced by representing outputs as abstract syntax trees (ASTs) (Aho et al., 2006), an approach that can be seen as a much lighter-weight version of CCG-based semantic parsing (Zettlemoyer and Collins, 2005).

name: ['D', 'i', 'r', 'e', ' ', 'W', 'o', 'l', 'f', ' ', 'A', 'l', 'p', 'h', 'a']
cost: ['2']
type: ['Minion']
rarity: ['Common']
race: ['Beast']
class: ['Neutral']
description: ['Adjacent', 'minions', 'have', '+', '1', 'Attack', '.']
health: ['2']
attack: ['2']
durability: ['-1']

class DireWolfAlpha(MinionCard):
    def __init__(self):
        super().__init__(
            "Dire Wolf Alpha", 2, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON,
            minion_type=MINION_TYPE.BEAST)
    def create_minion(self, player):
        return Minion(2, 2, auras=[
            Aura(ChangeAttack(1), MinionSelector(Adjacent()))
        ])

Figure 1: Example code for the "Dire Wolf Alpha" Hearthstone card.

show me the fare from ci0 to ci1
lambda $0 e ( exists $1 ( and ( from $1 ci0 ) ( to $1 ci1 ) ( = ( fare $1 ) $0 ) ) )

Figure 2: Example of a query and its logical form from the ATIS dataset. The ci0 and ci1 tokens are entity abstractions introduced in preprocessing (Dong and Lapata, 2016).

In this work, we introduce abstract syntax networks (ASNs), an extension of the standard encoder-decoder framework utilizing a modular decoder whose submodels are composed to natively generate ASTs in a top-down manner.
The decoding process for any given input follows a dy1139 namically chosen mutual recursion between the modules, where the structure of the tree being produced mirrors the call graph of the recursion. We implement this process using a decoder model built of many submodels, each associated with a specific construct in the AST grammar and invoked when that construct is needed in the output tree. As is common with neural approaches to structured prediction (Chen and Manning, 2014; Vinyals et al., 2015), our decoder proceeds greedily and accesses not only a fixed encoding but also an attention-based representation of the input (Bahdanau et al., 2014). Our model significantly outperforms previous architectures for code generation and obtains competitive or state-of-the-art results on a suite of semantic parsing benchmarks. On the HEARTHSTONE dataset for code generation, we achieve a token BLEU score of 79.2 and an exact match accuracy of 22.7%, greatly improving over the previous best results of 67.1 BLEU and 6.1% exact match (Ling et al., 2016). The flexibility of ASNs makes them readily applicable to other tasks with minimal adaptation. We illustrate this point with a suite of semantic parsing experiments. On the JOBS dataset, we improve on previous state-of-the-art, achieving 92.9% exact match accuracy as compared to the previous record of 90.7%. Likewise, we perform competitively on the ATIS and GEO datasets, matching or exceeding the exact match reported by Dong and Lapata (2016), though not quite reaching the records held by the best previous semantic parsing approaches (Wang et al., 2014). 1.1 Related work Encoder-decoder architectures, with and without attention, have been applied successfully both to sequence prediction tasks like machine translation and to tree prediction tasks like constituency parsing (Cross and Huang, 2016; Dyer et al., 2016; Vinyals et al., 2015). In the latter case, work has focused on making the task look like sequence-tosequence prediction, either by flattening the output tree (Vinyals et al., 2015) or by representing it as a sequence of construction decisions (Cross and Huang, 2016; Dyer et al., 2016). Our work differs from both in its use of a recursive top-down generation procedure. Dong and Lapata (2016) introduced a sequenceto-sequence approach to semantic parsing, including a limited form of top-down recursion, but without the modularity or tight coupling between output grammar and model characteristic of our approach. Neural (and probabilistic) modeling of code, including for prediction problems, has a longer history. Allamanis et al. (2015) and Maddison and Tarlow (2014) proposed modeling code with a neural language model, generating concrete syntax trees in left-first depth-first order, focusing on metrics like perplexity and applications like code snippet retrieval. More recently, Shin et al. (2017) attacked the same problem using a grammar-based variational autoencoder with top-down generation similar to ours instead. Meanwhile, a separate line of work has focused on the problem of program induction from input-output pairs (Balog et al., 2016; Liang et al., 2010; Menon et al., 2013). The prediction framework most similar in spirit to ours is the doubly-recurrent decoder network introduced by Alvarez-Melis and Jaakkola (2017), which propagates information down the tree using a vertical LSTM and between siblings using a horizontal LSTM. 
Our model differs from theirs in using a separate module for each grammar construct and learning separate vertical updates for siblings when the AST labels require all siblings to be jointly present; we do, however, use a horizontal LSTM for nodes with variable numbers of children. The differences between our models reflect not only design decisions, but also differences in data—since ASTs have labeled nodes and labeled edges, they come with additional structure that our model exploits. Apart from ours, the best results on the codegeneration task associated with the HEARTHSTONE dataset are based on a sequence-tosequence approach to the problem (Ling et al., 2016). Abstract syntax networks greatly improve on those results. Previously, Andreas et al. (2016) introduced neural module networks (NMNs) for visual question answering, with modules corresponding to linguistic substructures within the input query. The primary purpose of the modules in NMNs is to compute deep features of images in the style of convolutional neural networks (CNN). These features are then fed into a final decision layer. In contrast to the modules we describe here, NMN modules do not make decisions about what to generate or which modules to call next, nor do they 1140 ClassDef identifier Name identifier FunctionDef FunctionDef “DireWolfAlpha” “MinionCard” identifier “__init__” identifier “create_minion” ... name bases body ... (a) The root portion of the AST. Call identifier “Aura” Name Call Call identifier “ChangeAttack” Name identifier “MinionSelector” Name object 1 Num Call identifier “Adjacent” Name func func func args args args func args (b) Excerpt from the same AST, corresponding to the code snippet Aura(ChangeAttack(1),MinionSelector(Adjacent())). Figure 3: Fragments from the abstract syntax tree corresponding to the example code in Figure 1. Blue boxes represent composite nodes, which expand via a constructor with a prescribed set of named children. Orange boxes represent primitive nodes, with their corresponding values written underneath. Solid black squares correspond to constructor fields with sequential cardinality, such as the body of a class definition (Figure 3a) or the arguments of a function call (Figure 3b). maintain recurrent state. 2 Data Representation 2.1 Abstract Syntax Trees Our model makes use of the Abstract Syntax Description Language (ASDL) framework (Wang et al., 1997), which represents code fragments as trees with typed nodes. Primitive types correspond to atomic values, like integers or identifiers. Accordingly, primitive nodes are annotated with a primitive type and a value of that type—for instance, in Figure 3a, the identifier node storing "create minion" represents a function of the same name. Composite types correspond to language constructs, like expressions or statements. Each type has a collection of constructors, each of which specifies the particular language construct a node of that type represents. Figure 4 shows constructors for the statement (stmt) and expression (expr) types. The associated language constructs include function and class definitions, return statements, binary operations, and function calls. Composite types enter syntax trees via composite nodes, annotated with a composite type and a choice of constructor specifying how the node expands. The root node in Figure 3a, for example, is 1The full grammar can be found online on the documentation page for the Python ast module: https://docs.python.org/3/library/ast. 
html#abstract-grammar primitive types: identifier, object, ... stmt = FunctionDef( identifier name, arg* args, stmt* body) | ClassDef( identifier name, expr* bases, stmt* body) | Return(expr? value) | ... expr = BinOp(expr left, operator op, expr right) | Call(expr func, expr* args) | Str(string s) | Name(identifier id, expr_context ctx) | ... ... Figure 4: A simplified fragment of the Python ASDL grammar.1 a composite node of type stmt that represents a class definition and therefore uses the ClassDef constructor. In Figure 3b, on the other hand, the root uses the Call constructor because it represents a function call. Children are specified by named and typed fields of the constructor, which have cardinalities of singular, optional, or sequential. By default, fields have singular cardinality, meaning they correspond to exactly one child. For instance, the ClassDef constructor has a singular name field of type identifier. Fields of optional cardinality are associ1141 ated with zero or one children, while fields of sequential cardinality are associated with zero or more children—these are designated using ? and * suffixes in the grammar, respectively. Fields of sequential cardinality are often used to represent statement blocks, as in the body field of the ClassDef and FunctionDef constructors. The grammars needed for semantic parsing can easily be given ASDL specifications as well, using primitive types to represent variables, predicates, and atoms and composite types for standard logical building blocks like lambdas and counting (among others). Figure 2 shows what the resulting λ-calculus trees look like. The ASDL grammars for both λ-calculus and Prolog-style logical forms are quite compact, as Figures 9 and 10 in the appendix show. 2.2 Input Representation We represent inputs as collections of named components, each of which consists of a sequence of tokens. In the case of semantic parsing, inputs have a single component containing the query sentence. In the case of HEARTHSTONE, the card’s name and description are represented as sequences of characters and tokens, respectively, while categorical attributes are represented as single-token sequences. For HEARTHSTONE, we restrict our input and output vocabularies to values that occur more than once in the training set. 3 Model Architecture Our model uses an encoder-decoder architecture with hierarchical attention. The key idea behind our approach is to structure the decoder as a collection of mutually recursive modules. The modules correspond to elements of the AST grammar and are composed together in a manner that mirrors the structure of the tree being generated. A vertical LSTM state is passed from module to module to propagate information during the decoding process. The encoder uses bidirectional LSTMs to embed each component and a feedforward network to combine them. Component- and token-level attention is applied over the input at each step of the decoding process. We train our model using negative log likelihood as the loss function. The likelihood encompasses terms for all generation decisions made by the decoder. 3.1 Encoder Each component c of the input is encoded using a component-specific bidirectional LSTM. This results in forward and backward token encodings (−→ hc, ←− hc) that are later used by the attention mechanism. To obtain an encoding of the input as a whole for decoder initialization, we concatenate the final forward and backward encodings of each component into a single vector and apply a linear projection. 
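The encoder just described is compact enough to sketch end to end. The code below is an illustrative stand-in rather than the actual implementation: a toy numpy LSTM replaces the library one, and all class names, dimensions and initializations are assumptions. Each named component is read by its own bidirectional LSTM over token embeddings; the per-token forward and backward states are retained for the attention mechanism, while the final states of all components are concatenated and linearly projected to initialize the decoder.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    # Minimal LSTM with all gate weights packed into a single matrix.
    def __init__(self, input_dim, hidden_dim, rng):
        self.hidden_dim = hidden_dim
        scale = 1.0 / np.sqrt(input_dim + hidden_dim)
        self.W = rng.uniform(-scale, scale, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)

    def run(self, xs):
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        states = []
        for x in xs:
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            states.append(h)
        return states

class ComponentEncoder:
    def __init__(self, components, embed_dim, hidden_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.rng, self.embed_dim = rng, embed_dim
        self.embeddings = {}  # token -> vector, created lazily
        self.fwd = {c: TinyLSTM(embed_dim, hidden_dim, rng) for c in components}
        self.bwd = {c: TinyLSTM(embed_dim, hidden_dim, rng) for c in components}
        self.proj = rng.standard_normal((out_dim, 2 * hidden_dim * len(components))) * 0.01

    def embed(self, token):
        if token not in self.embeddings:
            self.embeddings[token] = self.rng.standard_normal(self.embed_dim) * 0.01
        return self.embeddings[token]

    def encode(self, inputs):
        # inputs: component name -> non-empty token sequence (every component
        # given at construction is assumed to be present).
        # Returns per-token (forward, backward) encodings for attention and a
        # projected summary vector used to initialize the decoder.
        token_encodings, finals = {}, []
        for comp, tokens in inputs.items():
            xs = [self.embed(t) for t in tokens]
            fwd = self.fwd[comp].run(xs)
            bwd = self.bwd[comp].run(xs[::-1])[::-1]
            token_encodings[comp] = list(zip(fwd, bwd))
            finals += [fwd[-1], bwd[0]]   # final forward and backward states
        summary = self.proj @ np.concatenate(finals)
        return token_encodings, summary

For a HEARTHSTONE-style input one might call, for example:

encoder = ComponentEncoder(["name", "description", "cost"], embed_dim=16, hidden_dim=8, out_dim=32)
token_enc, summary = encoder.encode({
    "name": list("Dire Wolf Alpha"),                        # character sequence
    "description": "Adjacent minions have +1 Attack .".split(),
    "cost": ["2"],
})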
3.2 Decoder Modules The decoder decomposes into several classes of modules, one per construct in the grammar, which we discuss in turn. Throughout, we let v denote the current vertical LSTM state, and use f to represent a generic feedforward neural network. LSTM updates with hidden state h and input x are notated as LSTM(h, x). Composite type modules Each composite type T has a corresponding module whose role is to select among the constructors C for that type. As Figure 5a exhibits, a composite type module receives a vertical LSTM state v as input and applies a feedforward network fT and a softmax output layer to choose a constructor: p (C | T, v) =  softmax (fT (v))  C. Control is then passed to the module associated with constructor C. Constructor modules Each constructor C has a corresponding module whose role is to compute an intermediate vertical LSTM state vu,F for each of its fields F whenever C is chosen at a composite node u. For each field F of the constructor, an embedding eF is concatenated with an attention-based context vector c and fed through a feedforward neural network fC to obtain a context-dependent field embedding: ˜eF = fC (eF, c) . An intermediate vertical state for the field F at composite node u is then computed as vu,F = LSTMv (vu, ˜eF) . Figure 5b illustrates the process, starting with a single vertical LSTM state and ending with one updated state per field. 1142 Assign ... stmt ClassDef Return If For While If (a) A composite type module choosing a constructor for the corresponding type. If test body orelse expr stmt* stmt* (b) A constructor module computing updated vertical LSTM states. stmt* stmt (c) A constructor field module (sequential cardinality) generating children to populate the field. At each step, the module decides whether to generate a child and continue (white circle) or stop (black circle). damage ... identifier __init__ create_minion add_buff change_attack add_buff (d) A primitive type module choosing a value from a closed list. Figure 5: The module classes constituting our decoder. For brevity, we omit the cardinality modules for singular and optional cardinalities. Constructor field modules Each field F of a constructor has a corresponding module whose role is to determine the number of children associated with that field and to propagate an updated vertical LSTM state to them. In the case of fields with singular cardinality, the decision and update are both vacuous, as exactly one child is always generated. Hence these modules forward the field vertical LSTM state vu,F unchanged to the child w corresponding to F: vw = vu,F. (1) Fields with optional cardinality can have either zero or one children; this choice is made using a feedforward network applied to the vertical LSTM state: p(zF = 1 | vu,F) = sigmoid (fgen F (vu,F)) . (2) If a child is to be generated, then as in (1), the state is propagated forward without modification. In the case of sequential fields, a horizontal LSTM is employed for both child decisions and state updates. We refer to Figure 5c for an illustration of the recurrent process. After being initialized with a transformation of the vertical state, sF,0 = WFvu,F, the horizontal LSTM iteratively decides whether to generate another child by applying a modified form of (2): p (zF,i = 1 | sF,i−1, vu,F) = sigmoid (fgen F (sF,i−1, vu,F)) . If zF,i = 0, generation stops and the process terminates, as represented by the solid black circle in Figure 5c. 
Otherwise, the process continues as represented by the white circle in Figure 5c. In that case, the horizontal state su,i−1 is combined with the vertical state vu,F and an attention-based context vector cF,i using a feedforward network fupdate F to obtain a joint context-dependent encoding of the field F and the position i: ˜eF,i = fupdate F (vu,F, su,i−1, cF,i). The result is used to perform a vertical LSTM update for the corresponding child wi: vwi = LSTMv(vu,F, ˜eF,i). Finally, the horizontal LSTM state is updated using the same field-position encoding, and the process continues: su,i = LSTMh(su,i−1, ˜eF,i). 1143 Primitive type modules Each primitive type T has a corresponding module whose role is to select among the values y within the domain of that type. Figure 5d presents an example of the simplest form of this selection process, where the value y is obtained from a closed list via a softmax layer applied to an incoming vertical LSTM state: p (y | T, v) =  softmax (fT (v))  y. Some string-valued types are open class, however. To deal with these, we allow generation both from a closed list of previously seen values, as in Figure 5d, and synthesis of new values. Synthesis is delegated to a character-level LSTM language model (Bengio et al., 2003), and part of the role of the primitive module for open class types is to choose whether to synthesize a new value or not. During training, we allow the model to use the character LSTM only for unknown strings but include the log probability of that binary decision in the loss in order to ensure the model learns when to generate from the character LSTM. 3.3 Decoding Process The decoding process proceeds through mutual recursion between the constituting modules, where the syntactic structure of the output tree mirrors the call graph of the generation procedure. At each step, the active decoder module either makes a generation decision, propagates state down the tree, or both. To construct a composite node of a given type, the decoder calls the appropriate composite type module to obtain a constructor and its associated module. That module is then invoked to obtain updated vertical LSTM states for each of the constructor’s fields, and the corresponding constructor field modules are invoked to advance the process to those children. This process continues downward, stopping at each primitive node, where a value is generated but no further recursion is carried out. 3.4 Attention Following standard practice for sequence-tosequence models, we compute a raw bilinear attention score qraw t for each token t in the input using the decoder’s current state x and the token’s encoding et: qraw t = e⊤ t Wx. The current state x can be either the vertical LSTM state in isolation or a concatentation of the vertical LSTM state and either a horizontal LSTM state or a character LSTM state (for string generation). Each submodule that computes attention does so using a separate matrix W. A separate attention score qcomp c is computed for each component of the input, independent of its content: qcomp c = w⊤ c x. The final token-level attention scores are the sums of the raw token-level scores and the corresponding component-level scores: qt = qraw t + qcomp c(t) , where c(t) denotes the component in which token t occurs. The attention weight vector a is then computed using a softmax: a = softmax (q) . Given the weights, the attention-based context is given by: c = X t atet. 
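Putting the pieces together, the attention computation admits a short, framework-free sketch; the argument names below are ours, and the function is illustrative rather than the actual implementation. It scores every input token with the bilinear term, adds the content-independent score of the token's component, and returns the softmax-weighted context vector along with the weights.

import numpy as np

def softmax(q):
    q = np.asarray(q, dtype=float)
    e = np.exp(q - q.max())
    return e / e.sum()

def attention_context(x, token_encodings, W, w_comp):
    # x: current decoder state (the vertical LSTM state, possibly concatenated
    #    with a horizontal or character LSTM state).
    # token_encodings: component name -> list of token encodings e_t.
    # W: bilinear matrix of the submodule computing this attention.
    # w_comp: component name -> content-independent score vector w_c.
    tokens, scores = [], []
    for comp, encodings in token_encodings.items():
        q_comp = w_comp[comp] @ x                # component-level score
        for e_t in encodings:
            tokens.append(e_t)
            scores.append(e_t @ W @ x + q_comp)  # raw token score plus component score
    a = softmax(scores)                          # attention weights
    context = sum(a_t * e_t for a_t, e_t in zip(a, tokens))
    return context, a

The returned weights are also the quantities constrained by the supervised attention loss described shortly.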
Certain decision points that require attention have been highlighted in the description above; however, in our final implementation we made attention available to the decoder at all decision points. Supervised Attention In the datasets we consider, partial or total copying of input tokens into primitive nodes is quite common. Rather than providing an explicit copying mechanism (Ling et al., 2016), we instead generate alignments where possible to define a set of tokens on which the attention at a given primitive node should be concentrated.2 If no matches are found, the corresponding set of tokens is taken to be the whole input. The attention supervision enters the loss through a term that encourages the final attention weights to be concentrated on the specified subset. Formally, if the matched subset of componenttoken pairs is S, the loss term associated with the supervision would be log X t exp (at) −log X t∈S exp (at), (3) 2Alignments are generated using an exact string match heuristic that also included some limited normalization, primarily splitting of special characters, undoing camel case, and lemmatization for the semantic parsing datasets. 1144 where at is the attention weight associated with token t, and the sum in the first term ranges over all tokens in the input. The loss in (3) can be interpreted as the negative log probability of attending to some token in S. 4 Experimental evaluation 4.1 Semantic parsing Data We use three semantic parsing datasets: JOBS, GEO, and ATIS. All three consist of natural language queries paired with a logical representation of their denotations. JOBS consists of 640 such pairs, with Prolog-style logical representations, while GEO and ATIS consist of 880 and 5,410 such pairs, respectively, with λ-calculus logical forms. We use the same training-test split as Zettlemoyer and Collins (2005) for JOBS and GEO, and the standard training-development-test split for ATIS. We use the preprocessed versions of these datasets made available by Dong and Lapata (2016), where text in the input has been lowercased and stemmed using NLTK (Bird et al., 2009), and matching entities appearing in the same input-output pair have been replaced by numbered abstract identifiers of the same type. Evaluation We compute accuracies using tree exact match for evaluation. Following the publicly released code of Dong and Lapata (2016), we canonicalize the order of the children within conjunction and disjunction nodes to avoid spurious errors, but otherwise perform no transformations before comparison. 4.2 Code generation Data We use the HEARTHSTONE dataset introduced by Ling et al. (2016), which consists of 665 cards paired with their implementations in the open-source Hearthbreaker engine.3 Our trainingdevelopment-test split is identical to that of Ling et al. (2016), with split sizes of 533, 66, and 66, respectively. Cards contain two kinds of components: textual components that contain the card’s name and a description of its function, and categorical ones that contain numerical attributes (attack, health, cost, and durability) or enumerated attributes (rarity, type, race, and class). The name of the card is represented as a sequence of characters, while 3Available online at https://github.com/ danielyule/hearthbreaker. its description consists of a sequence of tokens split on whitespace and punctuation. All categorical components are represented as single-token sequences. Evaluation For direct comparison to the results of Ling et al. 
(2016), we evaluate our predicted code based on exact match and token-level BLEU relative to the reference implementations from the library. We additionally compute node-based precision, recall, and F1 scores for our predicted trees compared to the reference code ASTs. Formally, these scores are obtained by defining the intersection of the predicted and gold trees as their largest common tree prefix. 4.3 Settings For each experiment, all feedforward and LSTM hidden dimensions are set to the same value. We select the dimension from {30, 40, 50, 60, 70} for the smaller JOBS and GEO datasets, or from {50, 75, 100, 125, 150} for the larger ATIS and HEARTHSTONE datasets. The dimensionality used for the inputs to the encoder is set to 100 in all cases. We apply dropout to the non-recurrent connections of the vertical and horizontal LSTMs, selecting the noise ratio from {0.2, 0.3, 0.4, 0.5}. All parameters are randomly initialized using Glorot initialization (Glorot and Bengio, 2010). We perform 200 passes over the data for the JOBS and GEO experiments, or 400 passes for the ATIS and HEARTHSTONE experiments. Early stopping based on exact match is used for the semantic parsing experiments, where performance is evaluated on the training set for JOBS and GEO or on the development set for ATIS. Parameters for the HEARTHSTONE experiments are selected based on development BLEU scores. In order to promote generalization, ties are broken in all cases with a preference toward higher dropout ratios and lower dimensionalities, in that order. Our system is implemented in Python using the DyNet neural network library (Neubig et al., 2017). We use the Adam optimizer (Kingma and Ba, 2014) with its default settings for optimization, with a batch size of 20 for the semantic parsing experiments, or a batch size of 10 for the HEARTHSTONE experiments. 4.4 Results Our results on the semantic parsing datasets are presented in Table 1. Our basic system achieves 1145 ATIS GEO JOBS System Accuracy System Accuracy System Accuracy ZH15 84.2 ZH15 88.9 ZH15 85.0 ZC07 84.6 KCAZ13 89.0 PEK03 88.0 WKZ14 91.3 WKZ14 90.4 LJK13 90.7 DL16 84.6 DL16 87.1 DL16 90.0 ASN 85.3 ASN 85.7 ASN 91.4 + SUPATT 85.9 + SUPATT 87.1 + SUPATT 92.9 Table 1: Accuracies for the semantic parsing tasks. ASN denotes our abstract syntax network framework. SUPATT refers to the supervised attention mentioned in Section 3.4. System Accuracy BLEU F1 NEAREST 3.0 65.0 65.7 LPN 6.1 67.1 – ASN 18.2 77.6 72.4 + SUPATT 22.7 79.2 75.6 Table 2: Results for the HEARTHSTONE task. SUPATT refers to the system with supervised attention mentioned in Section 3.4. LPN refers to the system of Ling et al. (2016). Our nearest neighbor baseline NEAREST follows that of Ling et al. (2016), though it performs somewhat better; its nonzero exact match number stems from spurious repetition in the data. a new state-of-the-art accuracy of 91.4% on the JOBS dataset, and this number improves to 92.9% when supervised attention is added. On the ATIS and GEO datasets, we respectively exceed and match the results of Dong and Lapata (2016). However, these fall short of the previous best results of 91.3% and 90.4%, respectively, obtained by Wang et al. (2014). This difference may be partially attributable to the use of typing information or rich lexicons in most previous semantic parsing approaches (Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2013; Wang et al., 2014; Zhao and Huang, 2015). On the HEARTHSTONE dataset, we improve significantly over the initial results of Ling et al. 
(2016) across all evaluation metrics, as shown in Table 2. On the more stringent exact match metric, we improve from 6.1% to 18.2%, and on tokenlevel BLEU, we improve from 67.1 to 77.6. When supervised attention is added, we obtain an additional increase of several points on each scale, achieving peak results of 22.7% accuracy and 79.2 BLEU. class IronbarkProtector(MinionCard): def __init__(self): super().__init__( ’Ironbark Protector’, 8, CHARACTER_CLASS.DRUID, CARD_RARITY.COMMON) def create_minion(self, player): return Minion( 8, 8, taunt=True) Figure 6: Cards with minimal descriptions exhibit a uniform structure that our system almost always predicts correctly, as in this instance. class ManaWyrm(MinionCard): def __init__(self): super().__init__( ’Mana Wyrm’, 1, CHARACTER_CLASS.MAGE, CARD_RARITY.COMMON) def create_minion(self, player): return Minion( 1, 3, effects=[ Effect( SpellCast(), ActionTag( Give(ChangeAttack(1)), SelfSelector())) ]) Figure 7: For many cards with moderately complex descriptions, the implementation follows a functional style that seems to suit our modeling strategy, usually leading to correct predictions. 4.5 Error Analysis and Discussion As the examples in Figures 6-8 show, classes in the HEARTHSTONE dataset share a great deal of common structure. As a result, in the simplest cases, such as in Figure 6, generating the code is simply a matter of matching the overall structure and plugging in the correct values in the initializer and a few other places. In such cases, our system generally predicts the correct code, with the 1146 class MultiShot(SpellCard): def __init__(self): super().__init__( ’Multi-Shot’, 4, CHARACTER_CLASS.HUNTER, CARD_RARITY.FREE) def use(self, player, game): super().use(player, game) targets = copy.copy( game.other_player.minions) for i in range(0, 2): target = game.random_choice(targets) targets.remove(target) target.damage( player.effective_spell_damage(3), self) def can_use(self, player, game): return ( super().can_use(player, game) and (len(game.other_player.minions) >= 2)) class MultiShot(SpellCard): def __init__(self): super().__init__( ’Multi-Shot’, 4, CHARACTER_CLASS.HUNTER, CARD_RARITY.FREE) def use(self, player, game): super().use(player, game) minions = copy.copy( game.other_player.minions) for i in range(0, 3): minion = game.random_choice(minions) minions.remove(minion) def can_use(self, player, game): return ( super().can_use(player, game) and len(game.other_player.minions) >= 3) Figure 8: Cards with nontrivial logic expressed in an imperative style are the most challenging for our system. In this example, our prediction comes close to the gold code, but misses an important statement in addition to making a few other minor errors. (Left) gold code; (right) predicted code. exception of instances in which strings are incorrectly transduced. Introducing a dedicated copying mechanism like the one used by Ling et al. (2016) or more specialized machinery for string transduction may alleviate this latter problem. The next simplest category of card-code pairs consists of those in which the card’s logic is mostly implemented via nested function calls. Figure 7 illustrates a typical case, in which the card’s effect is triggered by a game event (a spell being cast) and both the trigger and the effect are described by arguments to an Effect constructor. Our system usually also performs well on instances like these, apart from idiosyncratic errors that can take the form of under- or overgeneration or simply substitution of incorrect predicates. 
Cards whose code includes complex logic expressed in an imperative style, as in Figure 8, pose the greatest challenge for our system. Factors like variable naming, nontrivial control flow, and interleaving of code predictable from the description with code required due to the conventions of the library combine to make the code for these cards difficult to generate. In some instances (as in the figure), our system is nonetheless able to synthesize a close approximation. However, in the most complex cases, the predictions deviate significantly from the correct implementation. In addition to the specific errors our system makes, some larger issues remain unresolved. Existing evaluation metrics only approximate the actual metric of interest: functional equivalence. Modifications of BLEU, tree F1, and exact match that canonicalize the code—for example, by anonymizing all variables—may prove more meaningful. Direct evaluation of functional equivalence is of course impossible in general (Sipser, 2006), and practically challenging even for the HEARTHSTONE dataset because it requires integrating with the game engine. Existing work also does not attempt to enforce semantic coherence in the output. Long-distance semantic dependencies, between occurrences of a single variable for example, in particular are not modeled. Nor is well-typedness or executability. Overcoming these evaluation and modeling issues remains an important open problem. 5 Conclusion ASNs provide a modular encoder-decoder architecture that can readily accommodate a variety of tasks with structured output spaces. They are particularly applicable in the presence of recursive decompositions, where they can provide a simple decoding process that closely parallels the inherent structure of the outputs. Our results demonstrate their promise for tree prediction tasks, and we believe their application to more general output structures is an interesting avenue for future work. Acknowledgments MR is supported by an NSF Graduate Research Fellowship and a Fannie and John Hertz Foundation Google Fellowship. MS is supported by an NSF Graduate Research Fellowship. 1147 References Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman. 2006. Compilers: Principles, Techniques, and Tools (2Nd Edition). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA. Miltiadis Allamanis, Daniel Tarlow, Andrew D. Gordon, and Yi Wei. 2015. Bimodal modelling of source code and natural language. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015. pages 2123–2132. David Alvarez-Melis and Tommi S. Jaakkola. 2017. Tree-structured decoding with doubly-recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR) 2017. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Oral. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. 2016. Deepcoder: Learning to write programs. CoRR abs/1611.01989. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res. 3:1137–1155. http://dl.acm.org/citation.cfm?id=944919.944966. Steven Bird, Ewan Klein, and Edward Loper. 2009. 
Natural Language Processing with Python. O’Reilly Media, Inc., 1st edition. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 740–750. http://aclweb.org/anthology/D/D14/D14-1082.pdf. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. pages 1– 11. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. CoRR abs/1601.01280. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016. pages 199–209. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS10). Society for Artificial Intelligence and Statistics. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke S. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1545–1556. Percy Liang, Michael I. Jordan, and Dan Klein. 2010. Learning programs: A hierarchical bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel. pages 639–646. Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Comput. Linguist. 39(2):389–446. https://doi.org/10.1162/COLI a 00127. Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´as Kocisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Chris J. Maddison and Daniel Tarlow. 2014. Structured generative models of natural source code. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 2126 June 2014. pages 649–657. Aditya Krishna Menon, Omer Tamuz, Sumit Gulwani, Butler W. Lampson, and Adam Kalai. 2013. A machine learning framework for programming by example. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013. pages 187–195. 
Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke 1148 Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 . Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In Proceedings of the 8th international conference on Intelligent user interfaces. ACM, pages 149–157. Richard Shin, Alexander A. Alemi, Geoffrey Irving, and Oriol Vinyals. 2017. Tree-structured variational autoencoder. In Proceedings of the International Conference on Learning Representations (ICLR) 2017. Michael Sipser. 2006. Introduction to the Theory of Computation. Course Technology, second edition. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 2773–2781. Adrienne Wang, Tom Kwiatkowski, and Luke S Zettlemoyer. 2014. Morpho-syntactic lexical generalization for ccg semantic parsing. In EMNLP. pages 1284–1295. Daniel C. Wang, Andrew W. Appel, Jeff L. Korn, and Christopher S. Serra. 1997. The zephyr abstract syntax description language. In Proceedings of the Conference on Domain-Specific Languages on Conference on Domain-Specific Languages (DSL), 1997. USENIX Association, Berkeley, CA, USA, DSL’97, pages 17–17. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI ’05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence, Edinburgh, Scotland, July 26-29, 2005. pages 658–666. Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL-2007. pages 678– 687. Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015. pages 1416–1421. A Appendix expr = Apply(pred predicate, arg* arguments) | Not(expr argument) | Or(expr left, expr right) | And(expr* arguments) arg = Literal(lit literal) | Variable(var variable) Figure 9: The Prolog-style grammar we use for the JOBS task. 
expr = Variable(var variable) | Entity(ent entity) | Number(num number) | Apply(pred predicate, expr* arguments) | Argmax(var variable, expr domain, expr body) | Argmin(var variable, expr domain, expr body) | Count(var variable, expr body) | Exists(var variable, expr body) | Lambda(var variable, var_type type, expr body) | Max(var variable, expr body) | Min(var variable, expr body) | Sum(var variable, expr domain, expr body) | The(var variable, expr body) | Not(expr argument) | And(expr* arguments) | Or(expr* arguments) | Compare(cmp_op op, expr left, expr right) cmp_op = Equal | LessThan | GreaterThan Figure 10: The λ-calculus grammar used by our system. 1149
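To illustrate how such an ASDL-style grammar could be represented programmatically, for example to instantiate one decoder module per composite type, constructor, and field, the following is a hypothetical Python encoding of the JOBS grammar from Figure 9. The data format and helper function are ours and are not taken from the authors' code.

# Hypothetical encoding of the Figure 9 grammar: each composite type maps to
# its constructors, and each constructor to (field_name, field_type, cardinality)
# triples. Such a table could drive the instantiation of type, constructor,
# and field modules; it is an illustration, not the authors' data format.
JOBS_GRAMMAR = {
    "expr": {
        "Apply":    [("predicate", "pred", "single"),
                     ("arguments", "arg", "sequential")],
        "Not":      [("argument", "expr", "single")],
        "Or":       [("left", "expr", "single"),
                     ("right", "expr", "single")],
        "And":      [("arguments", "expr", "sequential")],
    },
    "arg": {
        "Literal":  [("literal", "lit", "single")],
        "Variable": [("variable", "var", "single")],
    },
}

# Primitive types in this grammar (no constructors of their own).
PRIMITIVE_TYPES = {"pred", "lit", "var"}

def field_modules_needed(grammar):
    """Enumerate the (type, constructor, field, field_type, cardinality)
    tuples for which a constructor field module would be instantiated."""
    for typ, constructors in grammar.items():
        for ctor, fields in constructors.items():
            for name, ftype, card in fields:
                yield typ, ctor, name, ftype, card

for spec in field_modules_needed(JOBS_GRAMMAR):
    print(spec)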
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1150–1159 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1106 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1150–1159 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1106 Visualizing and Understanding Neural Machine Translation Yanzhuo Ding† Yang Liu†‡∗Huanbo Luan† Maosong Sun†‡ †State Key Laboratory of Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Science and Technology, Tsinghua University, Beijing, China ‡Jiangsu Collaborative Innovation Center for Language Competence, Jiangsu, China [email protected], [email protected] [email protected], [email protected] Abstract While neural machine translation (NMT) has made remarkable progress in recent years, it is hard to interpret its internal workings due to the continuous representations and non-linearity of neural networks. In this work, we propose to use layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoderdecoder framework. We show that visualization with LRP helps to interpret the internal workings of NMT and analyze translation errors. 1 Introduction End-to-end neural machine translation (NMT), which leverages neural networks to directly map between natural languages, has gained increasing popularity recently (Sutskever et al., 2014; Bahdanau et al., 2015). NMT proves to outperform conventional statistical machine translation (SMT) significantly across a variety of language pairs (Junczys-Dowmunt et al., 2016) and becomes the new de facto method in practical MT systems (Wu et al., 2016). However, there still remains a severe challenge: it is hard to interpret the internal workings of NMT. In SMT (Koehn et al., 2003; Chiang, 2005), the translation process can be denoted as a derivation that comprises a sequence of translation rules (e.g., phrase pairs and synchronous CFG rules). Defined on language structures with varying granularities, these translation rules are interpretable from a linguistic perspective. In contrast, NMT takes an end-to-end approach: all internal information is represented as real-valued vectors or ∗Corresponding author. matrices. It is challenging to associate hidden states in neural networks with interpretable language structures. As a result, the lack of interpretability makes it very difficult to understand translation process and debug NMT systems. Therefore, it is important to develop new methods for visualizing and understanding NMT. Existing work on visualizing and interpreting neural models has been extensively investigated in computer vision (Krizhevsky et al., 2012; Mahendran and Vedaldi, 2015; Szegedy et al., 2014; Simonyan et al., 2014; Nguyen et al., 2015; Girshick et al., 2014; Bach et al., 2015). Although visualizing and interpreting neural models for natural language processing has started to attract attention recently (Karpathy et al., 2016; Li et al., 2016), to the best of our knowledge, there is no existing work on visualizing NMT models. 
Note that the attention mechanism (Bahdanau et al., 2015) is restricted to demonstrate the connection between words in source and target languages and unable to offer more insights in interpreting how target words are generated (see Section 4.5). In this work, we propose to use layer-wise relevance propagation (LRP) (Bach et al., 2015) to visualize and interpret neural machine translation. Originally designed to compute the contributions of single pixels to predictions for image classifiers, LRP back-propagates relevance recursively from the output layer to the input layer. In contrast to visualization methods relying on derivatives, a major advantage of LRP is that it does not require neural activations to be differentiable or smooth (Bach et al., 2015). We adapt LRP to the attention-based encoder-decoder framework (Bahdanau et al., 2015) to calculate relevance that measures the association degree between two arbitrary neurons in neural networks. Case studies on Chinese-English translation show that visualization helps to interpret the internal workings of 1150 在 纽约 zai niuyue </s> in New </s> York source words source word embeddings source forward hidden states source backward hidden states source hidden states source contexts target hidden states target word embeddings target words attention Figure 1: The attention-based encoder-decoder architecture for neural machine translation (Bahdanau et al., 2015). NMT and analyze translation errors. 2 Background Given a source sentence x = x1, . . . , xi, . . . , xI with I source words and a target sentence y = y1, . . . , yj, . . . , yJ with J target words, neural machine translation (NMT) decomposes the sentence-level translation probability as a product of word-level translation probabilities: P(y|x; θ) = J Y j=1 P(yj|x, y<j; θ), (1) where y<j = y1, . . . , yj−1 is a partial translation. In this work, we focus on the attention-based encoder-decoder framework (Bahdanau et al., 2015). As shown in Figure 1, given a source sentence x, the encoder first uses source word embeddings to map each source word xi to a real-valued vector xi.1 Then, a forward recurrent neural network (RNN) with GRU units (Cho et al., 2014) runs to calculate source forward hidden states: −→ h i = f(−→ h i−1, xi), (2) where f(·) is a non-linear function. Similarly, the source backward hidden states can be obtained using a backward RNN: ←− h i = f(←− h i+1, xi). (3) 1Note that we use x to denote a source sentence and x to denote the vector representation of a single source word. To capture global contexts, the forward and backward hidden states are concatenated as the hidden state for each source word: hi = [−→ h i; ←− h i]. (4) Bahdanau et al. (2015) propose an attention mechanism to dynamically determine the relevant source context cj for each target word: cj = I+1 X i=1 αj,ihi, (5) where αj,i is an attention weight that indicates how well the source word xi and the target word yj match. Note that an end-of-sentence token is appended to the source sentence. In the decoder, a target hidden state for the j-th target word is calculated as sj = g(sj−1, yj, cj), (6) where g(·) is a non-linear function, yj−1 denotes the vector representation of the (j −1)-th target word. Finally, the word-level translation probability is given by P(yj|x, y<j; θ) = ρ(yj−1, sj, cj), (7) where ρ(·) is a non-linear function. 
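To make the data flow of Eqs. (4)-(7) concrete, the following is a minimal numpy sketch of a single decoding step: concatenating the forward and backward encoder states, forming the attention-based source context, updating the target hidden state, and scoring the next target word. The functions f_dec and score are illustrative placeholders for the GRU-based g(.) and the output layer rho(.); the sketch is not taken from the GROUNDHOG implementation used in the paper.

# Sketch of one decoder step: h_i = [h_fwd_i ; h_bwd_i] (4),
# c_j = sum_i alpha_{j,i} h_i (5), s_j = g(s_{j-1}, y_{j-1}, c_j) (6),
# P(y_j | x, y_<j) = rho(y_{j-1}, s_j, c_j) (7). Illustrative only.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode_step(h_fwd, h_bwd, alpha_j, s_prev, y_prev, f_dec, score):
    """
    h_fwd, h_bwd : forward/backward encoder states, shape (I+1, d)
    alpha_j      : attention weights for target position j, shape (I+1,)
    s_prev       : previous target hidden state
    y_prev       : embedding of the previous target word
    f_dec, score : stand-ins for the non-linear functions g(.) and rho(.)
    """
    H = np.concatenate([h_fwd, h_bwd], axis=1)   # concatenated source hidden states
    c_j = alpha_j @ H                            # attention-based source context
    s_j = f_dec(s_prev, y_prev, c_j)             # target hidden state update
    probs = softmax(score(y_prev, s_j, c_j))     # word-level translation probabilities
    return s_j, c_j, probs

# toy usage with linear stand-ins for g(.) and rho(.)
I, d = 4, 3
rng = np.random.default_rng(1)
f_dec = lambda s, y, c: np.tanh(s + y.sum() + c[:len(s)])
score = lambda y, s, c: rng.normal(size=10)      # scores over a toy vocabulary
s_j, c_j, probs = decode_step(rng.normal(size=(I + 1, d)), rng.normal(size=(I + 1, d)),
                              softmax(rng.normal(size=I + 1)),
                              rng.normal(size=d), rng.normal(size=d),
                              f_dec, score)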
Although NMT proves to deliver state-of-theart translation performance with the capability to handle long-distance dependencies due to GRU and attention, it is hard to interpret the internal information such as −→ h i, ←− h i, hi, cj, and sj in the encoder-decoder framework. Though projecting word embedding space into two dimensions (Faruqui and Dyer, 2014) and the attention matrix (Bahdanau et al., 2015) shed partial light on how NMT works, how to interpret the entire network still remains a challenge. Therefore, it is important to develop new methods for understanding the translation process and analyzing translation errors for NMT. 3 Approach 3.1 Problem Statement Recent efforts on interpreting and visualizing neural models has focused on calculating the contribution of a unit at the input layer to the final decision at the output layer (Simonyan et al., 2014; Mahendran and Vedaldi, 2015; Nguyen et al., 2015; 1151 in New </s> York 在 纽约</s> in New zai niuyue Figure 2: Visualizing the relevance between the vector representation of a target word “New York” and those of all source words and preceding target words. Girshick et al., 2014; Bach et al., 2015; Li et al., 2016). For example, in image classification, it is important to understand the contribution of a single pixel to the prediction of classifier (Bach et al., 2015). In this work, we are interested in calculating the contribution of source and target words to the following internal information in the attention-based encoder-decoder framework: 1. −→ h i: the i-th source forward hidden state, 2. ←− h i: the i-th source backward hidden state, 3. hi: the i-th source hidden state, 4. cj: the j-th source context vector, 5. sj: the j-th target hidden state, 6. yj: the j-th target word embedding. For example, as shown in Figure 2, the generation of the third target word “York” depends on both the source context (i.e., the source sentence “zai niuyue </s>”) and the target context (i.e., the partial translation “in New”). Intuitively, the source word “niuyue” and the target word “New” are more relevant to “York” and should receive higher relevance than other words. The problem is how to quantify and visualize the relevance between hidden states and contextual word vectors. More formally, we introduce a number of definitions to facilitate the presentation. Definition 1 The contextual word set of a hidden state v ∈RM×1 is denoted as C(v), which is a set of source and target contextual word vectors u ∈RN×1 that influences the generation of v. Figure 3: A simple feed-forward network for illustrating layer-wise relevance propagation (Bach et al., 2015). For example, the context word set for −→ h i is {x1, . . . , xi}, for ←− h i is {xi, . . . , xI+1}, and for hi is {x1, . . . , xI+1}. The contextual word set for cj is {x1, . . . , xI+1}, for sj and yj is {x1, . . . , xI+1, y1, . . . , yj−1}. As both hidden states and contextual words are represented as real-valued vectors, we need to factorize vector-level relevance at the neuron level. Definition 2 The neuron-level relevance between the m-th neuron in a hidden state vm ∈R and the n-th neuron in a contextual word vector un ∈R is denoted as run←vm ∈R, which satisfies the following constraint: vm = X u∈C(v) N X n=1 run←vm (8) Definition 3 The vector-level relevance between a hidden state v and one contextual word vector u ∈C(v) is denoted as Ru←v ∈R, which quantifies the contribution of u to the generation of v. 
It is calculated as Ru←v = M X m=1 N X n=1 run←vm (9) Definition 4 The relevance vector of a hidden state v is a sequence of vector-level relevance of its contextual words: Rv = {Ru1←v, . . . , Ru|C(v)|←v} (10) Therefore, our goal is to compute relevance vectors for hidden states in a neural network, as shown in Figure 2. The key problem is how to compute neuron-level relevance. 3.2 Layer-wise Relevance Propagation We follow (Bach et al., 2015) to use layer-wise relevance propagation (LRP) to compute neuronlevel relevance. We use a simple feed-forward network shown in Figure 3 to illustrate the central idea of LRP. 1152 Input: A neural network G for a sentence pair and a set of hidden states to be visualized V. Output: Vector-level relevance set R. 1 for u ∈G in a forward topological order do 2 for v ∈OUT(u) do 3 calculating weight ratios wu→v; 4 end 5 end 6 for v ∈V do 7 for v ∈v do 8 rv←v = v; // initializing neuron-level relevance 9 end 10 for u ∈G in a backward topological order do 11 ru←v = P z∈OUT(u) wu→zrz←v; // calculating neuron-level relevance 12 end 13 for u ∈C(v) do 14 Ru←v = P u∈u P v∈v ru←v ; // calculating vector-level relevance 15 R = R ∪{Ru←v}; // Update vector-level relevance set 16 end 17 end Algorithm 1: Layer-wise relevance propagation for neural machine translation. LRP first propagates the relevance from the output layer to the intermediate layer: rz1←v1 = W(2) 1,1z1 W(2) 1,1z1 + W(2) 2,1z2 v1 (11) rz2←v1 = W(2) 2,1z2 W(2) 1,1z1 + W(2) 2,1z2 v1 (12) Note that we ignore the non-linear activation function because Bach et al. (2015) indicate that LRP is invariant against the choice of non-linear function. Then, the relevance is further propagated to the input layer: ru1←v1 = W(1) 1,1u1 W(1) 1,1u1 + W(1) 2,1u2 rz1←v1 + W(1) 1,2u1 W(1) 1,2u1 + W(1) 2,2u2 rz2←v1 (13) ru2←v1 = W(1) 2,1u2 W(1) 1,1u1 + W(1) 2,1u2 rz1←v1 + W(1) 2,2u2 W(1) 1,2u1 + W(1) 2,2u2 rz2←v1 (14) Note that ru1←v1 + ru2←v1 = v1. More formally, we introduce the following definitions to ease exposition. Definition 5 Given a neuron u, its incoming neuron set IN(u) comprises all its direct connected preceding neurons in the network. For example, in Figure 3, the incoming neuron set of z1 is IN(z1) = {u1, u2}. Definition 6 Given a neuron u, its outcoming neuron set OUT(u) comprises all its direct connected descendant neurons in the network. For example, in Figure 3, the incoming neuron set of z1 is OUT(z1) = {v1, v2}. Definition 7 Given a neuron v and its incoming neurons u ∈IN(v), the weight ratio that measures the contribution of u to v is calculated as wu→v = Wu,vu P u′∈IN(v) Wu′,vu′ (15) Although the NMT model usually involves multiple operators such as matrix multiplication, element-wise multiplication, and maximization, they only influence the way to calculate weight ratios in Eq. (15). For matrix multiplication such as v = Wu, its basic form that is calculated at the neuron level is given by v = P u∈IN(v) Wu,vu . We follow Bach et al. (2015) to calculate the weight ratio using Eq. (15). 1153 近 两 jin liang 年 nian 来 lai , 美国 , meiguo 近 两 年 来 , 美国 jin liang nian lai , meiguo 1 2 3 4 5 6 1 2 3 4 5 6 Figure 4: Visualizing source hidden states for a source content word “nian” (years). For element-wise multiplication such as v = u1◦u2, its basic form is given by v = Q u∈IN(v) u. 
We use the following method to calculate its weight ratio: wu→v = u P u′∈IN(v) u′ (16) For maximization such as v = max{u1, u2}, we calculate its weight ratio as follows: wu→v =  1 if u = maxu′∈IN(v){u′} 0 otherwise (17) Therefore, the general local redistribution rule for LRP is given by ru←v = X z∈OUT(u) wu→zrz←v (18) Algorithm 1 gives the layer-wise relevance propagation algorithm for neural machine translation. The input is an attention-based encoderdecoder neural network for a sentence pair after decoding G and a set of hidden states to be visualized V. The output is a set of vector-level relevance between intended hidden states and their contextual words R. The algorithm first computes weight ratios for each neuron in a forward pass (lines 1-4). Then, for each hidden state to be visualized (line 6), the algorithm initializes the neuron-level relevance for itself (lines 7-9). After initialization, the neuron-level relevance is backpropagated through the network (lines 10-12). Finally, vector-level relevance is calculated based on neuron-level relevance (lines 13-16). The time complexity of Algorithm 1 is O(|G|×|V|×Omax), 我 参拜 是 为了 祈求 my wo canbai shi weile qiqiu my visit to is pray 1 2 3 4 5 1 2 3 4 5 1 Figure 5: Visualizing target hidden states for a target content word “visit”. where |G| is the number of neuron units in the neural network G, |V| is the number of hidden states to be visualized and Omax is the maximum of outdegree for neurons in the network. Calculating relevance is more computationally expensive than computing attention as it involves all neurons in the network. Fortunately, it is possible to take advantage of parallel architectures of GPUs and relevance caching for speed-up. 4 Analysis 4.1 Data Preparation We evaluate our approach on Chinese-English translation. The training set consists of 1.25M pairs of sentences with 27.93M Chinese words and 34.51M English words. We use the NIST 2003 dataset as the development set for model selection and the NIST 2004 dataset as test set. The BLEU score on NIST 2003 is 32.73. We use the open-source toolkit GROUNDHOG (Bahdanau et al., 2015), which implements the attention-based encoder-decoder framework. After model training and selection on the training and development sets, we use the resulting NMT model to translate the test set. Therefore, the visualization examples in the following subsections are taken from the test set. 4.2 Visualization of Hidden States 4.2.1 Source Side Figure 4 visualizes the source hidden states for a source content word “nian” (years). For each word in the source string “jin liang nian lai , meiguo” (in recent two years, USA), we attach a number 1154 the 𝐥𝐚𝐫𝐠𝐞𝐬𝐭 UNK in 𝐭𝐡𝐞 𝐰𝐨𝐫𝐥𝐝 zhaiwuguo 世界 2 3 4 5 6 7 最 大 的债务国* , the largest 2 3 4 5 6 7 2 3 de da zui shijie , Figure 6: Visualizing target hidden states for a target UNK word. to denote the position of the word in the sentence. For example, “nian” (years) is the third word. We are interested in visualizing the relevance between the third source forward hidden state −→ h 3 and all its contextual words “jin” (recent) and “liang” (two). We observe that the direct preceding word “liang” (two) contributes more to forming the forward hidden state of “nian” (years). For the third source backward hidden state ←− h 3, the relevance of contextual words generally decreases with the increase of the distance to “nian” (years). Clearly, the concatenation of forward and backward hidden states h3 capture contexts in both directions. 
The situations for function words and punctuation marks are similar but the relevance is usually more concentrated on the word itself. We omit the visualization due to space limit. 4.2.2 Target Side Figure 5 visualizes the target-side hidden states for the second target word “visit”. For comparison, we also give the attention weights α2, which correctly identifies the second source word “canbai” (“visit”) is most relevant to “visit”. The relevance vector of the source context c2 is generally consistent with the attention but reveals that the third word “shi” (is) also contributes to the generation of “visit”. For the target hidden state s2, the contextual word set includes the first target word “my”. We find that most contextual words receive high values of relevance. This phenomenon has been frequently observed for most target words in other sentences. Note that relevance vector is not normalized. This is an essential difference between vote of confidence 参 6 7 8 众 两 院 5 6 7 8 9 yuan liang zhong can in the10 9 senate the 10 senate </s> 11 12 信任 投票 </s>11 10 11 xinren toupiao </s> Figure 7: Analyzing translation error: word omission. The 6-th source word “zhong” is untranslated incorrectly. attention and relevance. While attention is defined to be normalized, the only constraint on relevance is that the sum of relevance of contextual words is identical to the value of intended hidden state neuron. For the target word embedding y2, the relevance is generally consistent with the attention by identifying that the second source word contributes more to the generation of “visit”. But Ry2 further indicates that the target word “my” is also very important for generating “visit”. Figure 6 shows the hidden states of a target UNK word, which is very common to see in NMT because of limited vocabulary. It is interesting to investigate whether the attention mechanism could put a UNK in the right place in the translation. In this example, the 6-th source word “zhaiwuguo” is a UNK. We find that the model successfully predicts the correct position of UNK by exploiting surrounding source and target contexts. But the ordering of UNK usually becomes worse if multiple UNK words exist on the source side. 4.3 Translation Error Analysis Given the visualization of hidden states, it is possible to offer useful information for analyzing translation errors commonly observed in NMT such as word omission, word repetition, unrelated words and negation reversion. 4.3.1 Word Omission Given a source sentence “bajisitan zongtong muxialafu yingde can zhong liang yuan xinren toupiao” (pakistani president musharraf wins votes of confidence in senate and house), the NMT model pro1155 the history of 美国人 2 3 4 历史 上 有 1 2 3 4 4 you shang lishi meiguoren the history6 5 of the5 Figure 8: Analyzing translation error: word repetition. The target word “history” occurs twice in the translation incorrectly. duces a wrong translation “pakistani president win over democratic vote of confidence in the senate”. One translation error is that the 6-th source word “zhong” (house) is incorrectly omitted for translation. As the end-of-sentence token “</s>” occurs early than expected, we choose to visualize its corresponding target hidden states. Although the attention correctly identifies the 6-th source word “zhong” (house) to be important for generating the next target word, the relevance of source context Rc12 attaches more importance to the end-ofsentence token. 
Finally, the relevance of target word Ry12 reveals that the end-of-sentence token and the 11-th target word “senate” become dominant in the softmax layer for generating the target word. This example demonstrates that only using attention matrices does not suffice to analyze the internal workings of NMT. The values of relevance of contextual words might vary significantly across different layers. 4.3.2 Word Repetition Given a source sentence “meiguoren lishi shang you jiang chengxi de chuantong , you fancuo rencuo de chuantong” (in history , the people of america have the tradition of honesty and would not hesitate to admit their mistakes), the NMT model produces a wrong translation “in the history of the history of the history of the americans , there is a tradition of faith in the history of mistakes”. The is to forge ahead . </s> </s> 是 7 8 9 10 11 12 跨大西洋 关系 。 </s> is to 9 10 11 12 13 6 7 . guanxi kuadaxiyang is Figure 9: Analyzing translation error: unrelated words. The 9-th target word “forge” is totally unrelated to the source sentence. translation error is that “history” repeats four times in the translation. Figure 8 visualizes the target hidden states of the 6-th target word “history”. According to the relevance of the target word embedding Ry6, the first source word “meiguoren” (american), the second source word “lishi” (history) and the 5-th target word “the” are most relevant to the generation of “history”. Therefore, word repetition not only results from wrong attention but also is significantly influenced by target side context. This finding confirms the importance of controlling source and target contexts to improve fluency and adequacy (Tu et al., 2017). 4.3.3 Unrelated Words Given a source sentence “ci ci huiyi de yi ge zhongyao yiti shi kuadaxiyang guanxi” (one the the top agendas of the meeting is to discuss the cross-atlantic relations), the model prediction is “a key topic of the meeting is to forge ahead”. One translation error is that the 9-th English word “forge” is totally unrelated to the source sentence. Figure 9 visualizes the hidden states of the 9-th target word “forge”. We find that while the attention identifies the 10-th source word “kuadaxiyang” (cross-atlantic) to be most relevant, the relevance vector of the target word Ry9 finds that multiple source and target words should contribute to the generation of the next target word. We observe that unrelated words are more likely to occur if multiple contextual words have high 1156 we will talk 就 11 12 13 谈 不 上 6 7 8 9 10 shang bu tan jiu about development15 14 talk will12 发展 13 fazhan Figure 10: Analyzing translation error: negation. The 8-th negation source word “bu” (not) is not translated. values in the relevance vector of the target word being generated. 4.3.4 Negation Reversion Given a source sentence “bu jiejue shengcun wenti , jiu tan bu shang fa zhan , geng tan bu shang ke chixu fazhan” (without solution to the issue of subsistence , there will be no development to speak of , let alone sustainable development), the model prediction is “if we do not solve the problem of living , we will talk about development and still less can we talk about sustainable development”. The translation error is that the 8-th negation source word “bu” (not) is untranslated. The omission of negation is a severe translation error it reverses the meaning of the source sentence. 
As shown in Figure 10, while both attention and relevance correctly identify the 8-th negation word “bu” (not) to be most relevant, the model still generates “about” instead of a negation target word. One possible reason is that target context words “will talk” take the lead in determining the next target word. 4.4 Extra Words Given a source sentence “bajisitan zongtong muxialafu yingde can zhong liang yuan xinren toupiao”(pakistani president musharraf wins votes of confidence in senate and house), the model prediction is “pakistani president win over democratic vote of confidence in the senate” The translation error is that the 5-th target word “democratic” is extra generated. democratic vote of confidence 两 5 6 7 8 院 信任 投票 </s> 7 8 9 10 11 toupiao xinren yuan liang in the10 9 win over 3 4 </s> Figure 11: Analyzing translation error: extra word. The 5-th target word “democratic” is an extra word. Figure 11 visualizes the hidden states of the 9-th target word “forge”. We find that while the attention identifies the 9-th source word “xinren”(confidence) to be most relevant, the relevance vector of the target word Ry9 indicates that the end-of-sentence token and target words contribute more to the generation of “democratic”. 4.5 Summary of Findings We summarize the findings of visualizing and analyzing the decoding process of NMT as follows: 1. Although attention is very useful for understanding the connection between source and target words, only using attention is not sufficient for deep interpretation of target word generation (Figure 9); 2. The relevance of contextual words might vary significantly across different layers of hidden states (Figure 9); 3. Target-side context also plays a critical role in determining the next target word being generated. It is important to control both source and target contexts to produce correct translations (Figure 10); 4. Generating the end-of-sentence token too early might lead to many problems such as word omission, unrelated word generation, and truncated translation (Figures 7 and 9). 1157 5 Related Work Our work is closely related to previous visualization approaches that compute the contribution of a unit at the input layer to the final decision at the output layer (Simonyan et al., 2014; Mahendran and Vedaldi, 2015; Nguyen et al., 2015; Girshick et al., 2014; Bach et al., 2015; Li et al., 2016). Among them, our approach bears most resemblance to (Bach et al., 2015) since we adapt layer-wise relevance propagation to neural machine translation. The major difference is that word vectors rather than single pixels are the basic units in NMT. Therefore, we propose vectorlevel relevance based on neuron-level relevance for NMT. Calculating weight ratios has also been carefully designed for the operators in NMT. The proposed approach also differs from (Li et al., 2016) in that we use relevance rather than partial derivative to quantify the contributions of contextual words. A major advantage of using relevance is that it does not require neural activations to be differentiable or smooth (Bach et al., 2015). The relevance vector we used is significantly different from the attention matrix (Bahdanau et al., 2015). While attention only demonstrates the association degree between source and target words, relevance can be used to calculate the association degree between two arbitrary neurons in neural networks. In addition, relevance is effective in analyzing the effect of source and target contexts on generating target words. 
6 Conclusion In this work, we propose to use layer-wise relevance propagation to visualize and interpret neural machine translation. Our approach is capable of calculating the relevance between arbitrary hidden states and contextual words by back-propagating relevance along the network recursively. Analyses of the state-of-art attention-based encoder-decoder framework on Chinese-English translation show that our approach is able to offer more insights than the attention mechanism for interpreting neural machine translation. In the future, we plan to apply our approach to more NMT approaches (Sutskever et al., 2014; Shen et al., 2016; Tu et al., 2016; Wu et al., 2016) on more language pairs to further verify its effectiveness. It is also interesting to develop relevancebased neural translation models to explicitly control relevance to produce better translations. Acknowledgements This work is supported by the National Natural Science Foundation of China (No.61522204), the 863 Program (2015AA015407), and the National Natural Science Foundation of China (No.61432013). This research is also supported by the Singapore National Research Foundation under its International Research Centre@Singapore Funding Initiative and administered by the IDM Programme. References Sebastian Bach, Alexander Binder, Gr´egoire Montavon, Frederick Klauschen, Klaus-Robert M¨uller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE . Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Davie Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of EMNLP. Mannal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of EACL. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of CVPR. Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? a case study on 30 translation directions. arXiv:1610.01108v2. Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2016. Visualing and understanding recurrent networks. In Proceedings of ICLR Workshop. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL. Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. 2012. Imagenet classification with deep convolutional nerual networks. In Proceedings of NIPS. 1158 Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. In Proceedings of NAACL. Aravindh Mahendran and Andrea Vedaldi. 2015. Understanding deep image representations by inverting them. In Proceedings of CVPR. Anh Nguyen, Jason Yosinski, and Jeff Clune. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecignizable images. In Proceedings of CVPR. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of ACL. 
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualizing image classification models and saliency maps. In Proceedings of ICLR Workshop. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of ICLR. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Context gates for neural machine translation. Transactions of the ACL . Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144v2. 1159
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1160–1170 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1107 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1160–1170 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1107 Detecting annotation noise in automatically labelled data Ines Rehbein Josef Ruppenhofer IDS Mannheim/University of Heidelberg, Germany Leibniz Science Campus “Empirical Linguistics and Computational Language Modeling” [email protected], [email protected] Abstract We introduce a method for error detection in automatically annotated text, aimed at supporting the creation of high-quality language resources at affordable cost. Our method combines an unsupervised generative model with human supervision from active learning. We test our approach on in-domain and out-of-domain data in two languages, in AL simulations and in a real world setting. For all settings, the results show that our method is able to detect annotation errors with high precision and high recall. 1 Introduction Until recently, most of the work in Computational Linguistics has been focussed on standard written text, often from newswire. The emergence of two new research areas, Digital Humanities and Computational Sociolinguistics, have however shifted the interest towards large, noisy text collections from various sources. More and more researchers are working with social media text, historical data, or spoken language transcripts, to name but a few. Thus the need for NLP tools that are able to process this data has become more and more apparent, and has triggered a lot of work on domain adaptation and on developing more robust preprocessing tools. Studies are usually carried out on large amounts of data, and thus fully manual annotation or even error correction of automatically prelabelled text is not feasible. Given the importance of identifying noisy annotations in automatically annotated data, it is all the more surprising that up to now this area of research has been severely understudied. This paper addresses this gap and presents a method for error detection in automatically labelled text. As test cases, we use POS tagging and Named Entity Recognition, both standard preprocessing steps for many NLP applications. However, our approach is general and can also be applied to other classification tasks. Our approach is based on the work of Hovy et al. (2013) who develop a generative model for estimating the reliability of multiple annotators in a crowdsourcing setting. We adapt the generative model to the task of finding errors in automatically labelled data by integrating it in an active learning (AL) framework. We first show that the approach of Hovy et al. (2013) on its own is not able to beat a strong baseline. We then present our integrated model, in which we impose human supervision on the generative model through AL, and show that we are able to achieve substantial improvements in two different tasks and for two languages. Our contributions are the following. We provide a novel approach to error detection that is able to identify errors in automatically labelled text with high precision and high recall. To the best of our knowledge, our method is the first that addresses this task in an AL framework. 
We show how AL can be used to guide an unsupervised generative model, and we will make our code available to the research community.1 Our approach works particularly well in out-of-domain settings where no annotated training data is yet available. 2 Related work Quite a bit of work has been devoted to the identifcation of errors in manually annotated corpora (Eskin, 2000; van Halteren, 2000; Kveton and Oliva, 2002; Dickinson and Meurers, 2003; Loftsson, 2009; Ambati et al., 2011). 1Our code is available at http://www.cl. uni-heidelberg.de/˜rehbein/resources. 1160 Several studies have tried to identify trustworthy annotators in crowdsourcing settings (Snow et al., 2008; Bian et al., 2009), amongst them the work of Hovy et al. (2013) described in Section 3. Others have proposed selective relabelling strategies when working with non-expert annotators (Sheng et al., 2008; Zhao et al., 2011). Manual annotations are often inconsistent and annotation errors can thus be identified by looking at the variance in the data. In contrast to this, we focus on detecting errors in automatically labelled data. This is a much harder problem as the annotation errors are systematic and consistent and therefore hard to detect. Only a few studies have addressed this problem. One of them is Rocio et al. (2007) who adapt a multiword unit extraction algorithm to detect automatic annotation errors in POS tagged corpora. Their semi-automatic method is geared towards finding (a small number of) high frequency errors in large datasets, often caused by tokenisation errors. Their algorithm extracts sequences that have to be manually sorted into linguistically sound patterns and erroneous patterns. Loftsson (2009) tests several methods for error detection in POS tagged data, one of them based on the predictions of an ensemble of 5 POS taggers. Error candidates are those tokens for which the predictions of all ensemble taggers agree but that diverge from the manual annotation. This simple method yields a precision of around 16% (no. of true positives amongst the error candidates), but no information is given about the recall of the method, i.e. how many of the errors in the corpus have been identified. Rehbein (2014) extends the work of Loftsson (2009) by training a CRF classifier on the output of ensemble POS taggers. This results in a much higher precision, but with low recall (for a precision in the range of 50-60% they report a recall between 10-20%). Also related is work that addresses the issue of learning in the presence of annotation noise (Reidsma and Carletta, 2008; Beigman and Klebanov, 2009; Bekker and Goldberger, 2016). The main difference to our work lies in its different focus. While our focus is on identifying errors with the goal of improving the quality of an existing language resource, their main objective is to improve the accuracy of a machine learning system. In the next section we describe the approach of Hovy et al. (2013) and present our adaptation Algorithm 1 AL with variational inference Input: classifier predictions A 1: for 1 ... n iterations do 2: procedure GENERATE(A) 3: for i = 1 ... n classifiers do 4: Ti ∼Uniform 5: for j = 1 ... n instances do 6: Sij ∼Bernoulli(1 −θj) 7: if Sij = 0 then 8: Aij = Ti 9: else 10: Aij ∼Multinomial(ξj) 11: end if 12: end for 13: end for 14: return posterior entropies E 15: end procedure 16: procedure ACTIVELEARNING(A) 17: rank J →max(E) 18: for j = 1 ... 
n instances do 19: Oracle →label(j); 20: select random classifier i; 21: update model prediction for i(j); 22: end for 23: end procedure 24: end for for semi-supervised error detection that combines Bayesian inference with active learning. 3 Method 3.1 Modelling human annotators Hovy et al. (2013) develop a generative model for Multi-Annotator Competence Estimation (MACE) to determine which annotators to trust in a crowdsourcing setting (Algorithm 1, lines 2-15). MACE implements a simple graphical model where the input consists of all annotated instances I by a set of J annotators. The model generates the observed annotations A as follows. The (unobserved) “true” label Ti is sampled from a uniform prior, based on the assumption that the annotators always try to predict the correct label and thus the majority of the annotations should, more often than not, be correct. The model is unsupervised, meaning that no information on the real gold labels is available. To model each annotator’s behaviour, a binary variable Sij (also unobserved) is drawn from a Bernoulli distribution that describes whether annotator j is trying to predict the correct label for instance i or whether s/he is just spamming (a behaviour not uncommon in a crowdsourcing setting). If Sij is 0, the “true” label Ti is used to generate the annotation Aij. If Sij is 1, the predicted label Aij for instance i comes from a multinomial distribution with parameter vector ξj. 1161 The model parameter θj can be interpreted as a “trustworthiness” parameter that describes the probability that annotator j predicts the correct label. ξj, on the other hand, contains information about the actual behaviour of annotator j in the case that the annotator is not trying to predict the correct label. The model parameters are learned by maximizing the marginal likelihood of the observed data, using Expectation Maximization (EM) (Dempster et al., 1977) and Bayesian variational inference. Bayesian inference is used to provide the model with priors on the annotators’ behaviour and yields improved correlations over EM between the model estimates and the annotators’ proficiency while keeping accuracy high. For details on the implementation and parameter settings refer to Hovy et al. (2013) and Johnson (2007). We adapt the model of Hovy et al. (2013) and apply it to the task of error detection in automatically labelled text. To that end, we integrate the variational model in an active learning (AL) setting, with the goal of identifying as many errors as possible while keeping the number of instances to be checked as small as possible. The tasks we chose in our experiments are POS tagging and NER, but our approach is general and can easily be applied to other classification tasks. 3.2 Active learning Active learning (Cohn et al., 1996) is a semisupervised framework where a machine learner is trained on a small set of carefully selected instances that are informative for the learning process, and thus yield the same accuracy as when training the learner on a larger set of randomly chosen examples. The main objective is to save time and money by minimising the need for manual annotation. Many different measures of informativeness as well as selection strategies for AL have been proposed in the literature, amongst them query-by-committee learning (Seung et al., 1992). The query-by-committee (QBC) approach uses a classifier ensemble (or committee) and selects the instances that show maximal disagreement between the predictions of the committee members. 
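To make this generative story concrete, the following is a minimal simulation of the annotation process as described above. It is our own NumPy illustration rather than the authors' (or MACE's) implementation, and the toy label set, trustworthiness values and variable names are assumptions.

```python
import numpy as np

def simulate_annotations(n_instances, labels, theta, xi, rng):
    """Simulate the MACE-style generative process described in the text.

    theta[j] : probability that annotator j tries to predict the true label.
    xi[j]    : multinomial over labels used when annotator j is spamming.
    """
    n_annotators = len(theta)
    T = rng.choice(labels, size=n_instances)              # "true" labels, uniform prior
    A = np.empty((n_instances, n_annotators), dtype=object)
    for i in range(n_instances):
        for j in range(n_annotators):
            spamming = rng.random() < (1.0 - theta[j])    # S_ij ~ Bernoulli(1 - theta_j)
            A[i, j] = T[i] if not spamming else rng.choice(labels, p=xi[j])
    return T, A

rng = np.random.default_rng(0)
labels = ["DT", "NN", "VB"]                               # toy tag set (assumption)
theta = [0.9, 0.7, 0.5]                                   # per-annotator trustworthiness
xi = [np.ones(len(labels)) / len(labels)] * len(theta)    # spamming distributions
T, A = simulate_annotations(5, labels, theta, xi, rng)
print(T)
print(A)
```

Fitting θ and ξ from an observed matrix A is what MACE's variational EM does; the sketch only covers the forward direction.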
These instances are assumed to provide new information for the learning process, as the classifiers are most unsure about how to label them. The selected instances are then presented to the oracle (the human annotator), to be manually disambiguated and added to the training data. Then the classifier committee is retrained on the extended training set and the next AL iteration starts. The query-by-committee strategy calls to mind previous work on error detection in manually labelled text that made use of disagreements between the predictions of a classifier ensemble and the manually assigned tag, to identify potential annotation errors in the data (Loftsson, 2009). This approach works surprisingly well, and the tradeoff between precision and recall can be balanced by adding a threshold (i.e. by considering all instances where at least N of the ensemble classifiers disagree with the manually assigned label). Loftsson (2009) reports a precision of around 16% for using a committee of five POS taggers to identify annotation errors (see section 2). Let us assume we follow this approach and apply a tagger with an average accuracy of 97% to a corpus with 100,000 tokens. We can then expect around 3,000 incorrectly tagged instances in the data. Trying to identify these with a precision of 16% means that when looking at 1,000 instances of potential errors, we can only expect to see around 160 true positive cases, and we would have to check a large amount of data in order to correct a substantial part of the annotation noise. This means that this approach is not feasible for correcting large automatically annotated data. It is thus essential to improve precision and recall for error detection, and our goal is to minimise the number of instances that have to be manually checked while maximizing the number of true errors in the candidate set. In what follows we show how we can achieve this by using active learning to guide variational inference for error detection. 3.3 Guiding variational inference with AL Variational inference is a method from calculus where the posterior distribution over a set of unobserved random variables Y is approximated by a variational distribution Q(Y ). We start with some observed data X (a set of predictions made by our committee of classifiers) The distribution of the true labels Y = {y1, y2, ..., yn} is unknown. As it is too difficult to work with the posterior p(y|x), we try to approximate it with a much simpler distribution q(y) which models y for each observed x. To that end, we define a family Q of distributions that are computationally easy to work with, and pick the q in Q that best approximates the posterior, where q(y) is called the variational approximation to the posterior p(y|x). 1162 For computing variational inference, we use the implementation of Hovy et al. (2013)2 who jointly optimise p and q using variational EM. They alternate between adjusting q given the current p (Estep) and adjusting p given the current q (M-step). In the E-step, the objective is to find the q that minimises the divergence between the two distributions, D(q||p). In the M-step, we keep q fixed and try to adjust p. The two steps are repeated until convergence. We extend the model for use in AL as follows (Algorithm 1). We start with the predictions from a classifier ensemble and learn a variational inference model on the data (lines 2-15). We then use the posterior entropies according to the current model, and select the c instances with the highest entropies for manual validation. 
These instances are presented to the oracle who assigns the true label. We save the predictions made by the human annotator and, in the next iteration, use them in the variational E-step as a prior to guide the learning process. In addition, we randomly pick one of the classifiers and update its prediction by replacing the classifier’s prediction with the label we obtained from the oracle.3 In the next iteration, we train the variational model on the updated predictions. By doing this, we also gradually improve the quality of the input to the variational model. In a typical AL approach, the main goal is to improve the classifiers’ accuracy on new data. In contrast to that, our approach aims at increasing precision and recall for error detection in automatically labelled data, and thus at minimising the time needed for manual correction. Please note that in our model we do not need to retrain the classifiers used for predicting the labels but only retrain the model that determines which of the classifiers’ predictions we can trust. This is crucial as it saves time and makes it easy to integrate the approach in a realistic scenario with a real human annotator in the loop. 4 Data and setup In our first experiment (§5.1) we want to assess the benefits of our approach for finding POS errors in standard newspaper text (in-domain setting) where 2MACE is available for download from http://www.isi.edu/publications/licensed-sw/mace 3We also experimented with updating more than one classifier, which resulted in lower precision and recall. We take this as evidence for the importance of keeping the variance in the predictions high. we have plenty of training data. For this setting, we use the English Penn Treebank, annotated with parts-of-speech, for training and testing. In the second experiment (§5.2) we apply our method in an out-of-domain setting where we want to detect POS errors in text from new domains where no training data is yet available (outof-domain setting). For this we use the Penn Treebank as training data, and test our models on data from the English Web treebank (Bies et al., 2012). To test our method on a different task and a new language, we apply it to Named Entity Recognition (NER) (experiment 3, §5.3), using out-ofdomain data from the Europarl corpus.4 The data was created by Faruqui and Pado (2010) and includes the first two German Europarl session transcripts, manually annotated with NER labels according to the CoNLL 2003 annotation guidelines (Tjong Kim Sang and De Meulder, 2003). The first three experiments are simulation studies. In our last experiment (§5.4), we show that our method also works well in a real AL scenario with a human annotator in the loop. For this we use the out-of-domain setting from the second experiment and let the annotators correct POS errors in two web genres (answers, weblogs) from the English Web treebank. 4.1 Tools for preprocessing For the POS tagging experiments, we use the following taggers to predict the labels: • bi-LSTM-aux (Plank et al., 2016) • HunPos (Hal´acsy et al., 2007) • Stanford postagger (Toutanova et al., 2003) • SVMTool (Gim´enez and M`arquez, 2004) • TreeTagger (Schmid, 1999) • TWeb (Ma et al., 2014) • Wapiti (Lavergne et al., 2010) The taggers implement a range of different algorithms, including HMMs, decision trees, SVMs, maximum entropy and neural networks. 
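The predictions of such an ensemble form the annotation matrix that the VI-AL procedure of Section 3.3 (Algorithm 1) operates on. A rough sketch of that loop is given below; the toy_posteriors stand-in (prior-aware vote counts) is purely illustrative and only there to make the sketch runnable, whereas the paper uses MACE's variational EM and feeds the oracle labels in as priors on the E-step rather than forcing them to one-hot.

```python
import numpy as np

def toy_posteriors(A, priors, labels):
    """Illustrative stand-in for MACE's variational EM: normalised vote
    counts per instance, with oracle-labelled instances forced to one-hot."""
    posteriors = []
    for i, row in enumerate(A):
        if i in priors:
            p = (np.array(labels) == priors[i]).astype(float)
        else:
            p = np.array([np.sum(row == lab) for lab in labels], dtype=float)
        posteriors.append(p / p.sum())
    return posteriors

def vi_al(A, labels, oracle, n_iterations, rng, fit_posteriors=toy_posteriors):
    """Sketch of Algorithm 1: pick the instance with the highest posterior
    entropy, query the oracle, feed the label back as a prior, and overwrite
    the prediction of one randomly chosen classifier (in place)."""
    corrections = {}
    for _ in range(n_iterations):
        posteriors = fit_posteriors(A, corrections, labels)
        entropy = np.array([-np.sum(p * np.log(p + 1e-12)) for p in posteriors])
        entropy[list(corrections)] = -np.inf              # skip already corrected instances
        i = int(np.argmax(entropy))
        corrections[i] = oracle(i)                        # human disambiguation
        A[i, rng.integers(A.shape[1])] = corrections[i]   # update one random classifier
    return corrections

rng = np.random.default_rng(1)
labels = ["DT", "NN", "VB"]
gold = np.array(["DT", "NN", "VB", "NN"])
A = np.array([["DT", "DT", "NN"], ["NN", "VB", "NN"],
              ["VB", "VB", "VB"], ["DT", "VB", "NN"]], dtype=object)
print(vi_al(A, labels, lambda i: gold[i], n_iterations=2, rng=rng))
```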
We train the taggers on subsets of 20,000 sentences extracted from the standard training set of the PTB (sections 00-18)5 and use the development and test set (sections 19-21 and 22-24) for testing. The training times of the taggers vary considerably, ranging from a few seconds (HunPos) to several 4The NER taggers have been trained on written German data from the HGC and DeWaC corpora (see §4.1). 5For taggers that use a development set during training, we also extract the dev data from sections 00-18 of the PTB. 1163 hours. This is a problem for the typical AL setting where it is crucial not to keep the human annotators waiting for the next instance while the system retrains. A major advantage of our setup is that we do not need to retrain the baseline classifiers as we only use them once, for preprocessing, before the actual error detection starts. For the NER experiment, we use tools for which pretrained models for German are available, namely GermaNER (Benikova et al., 2015), and the StanfordNER system (Finkel and Manning, 2009) with models trained on the HGC and the DeWaC corpus (Baroni et al., 2009; Faruqui and Pad´o, 2010).6 4.2 Evaluation measures We report results for different evaluation measures to asses the usefulness of our method. First, we report tagger accuracy on the data, obtained during preprocessing (figure 1). This corresponds to the accuracy of the labels in the corpus before error correction (baseline accuracy). Label accuracy measures the accuracy of the labels in the corpus after N iterations of error correction. Please note that we do not retrain the tools used for preprocessing, but assess the quality of the data after N iterations of manual inspection and correction. We also report precision and recall for the error detection itself. True positives (tp) refers to the number of instances selected for correction during AL that were actual annotation errors. We compute Error detection (ED) precision as the number of true positives divided by the number of all instances selected for error correction during N iterations of AL, and recall as the ratio of correctly identified errors to all errors in the data. 4.3 Baseline accuracies Table 1 shows the accuracies for the individual POS taggers used in experiments 1, 2 and 4. Please note that this is not a fair comparison as each tagger was trained on a different randomly sampled subset of the data and, crucially, we did not optimise any of the taggers but used default settings in all experiments.7 The accuracies of the 6To increase the number of annotators we use an older version of the StanfordNER (2009-01-16) and a newer version (2015-12-09), with both the DeWaC and HGC models, resulting in a total of 5 annotators for the NER task. 7Please note that the success of our method relies on the variation in the ensemble predictions, and thus improving the accuracies for preprocessing is not guaranteed to improve precision for the error detection task. Annotation matrix: c1 c2 ... cn DT DT ... DT N NE ... N V V ... V ... ... ... ... EVAL: tagger acc. Classifiers: c1, c2, ..., cn EVAL: ED precision, recall, #true pos EVAL: label accuracy QBC VI-AL entropy posterior entropy Oracle Select instances get label Output after N iterations: update matrix retrain VI QBC VI-AL majority vote VI prediction cQBC DT N V ... cV I−AL DT NE V ... EVAL Evaluation measures used in the experiments tagger acc Accuracy of preprocessing classifiers on the data. label acc Label accuracy in the corpus after N iterations of AL. true pos No. 
of instances selected for correction that are true errors. ED prec No. of true pos. / all instances selected for error correction. recall Correctly identified errors / all errors in the corpus. Preprocessing AL for N iterations Output Figure 1: Error detection procedure and overview over different evaluation measures for assessing the quality of error identification. baseline taggers vary between 94-97%, with an average accuracy of 95.8%. The majority baseline yields better results than the best individual tagger, with an accuracy of 97.3%. Importantly, the predictions made by the variational inference model (MACE) are in the same range as the majority baseline and thus do not improve over the 1164 Tagger Acc. bilstm 97.00 hunpos 96.18 stanford 96.93 svmtool 95.86 treetagger 94.35 tweb 95.99 wapiti 94.52 avg. 95.83 majority vote 97.28 MACE 97.27 Table 1: Tagger accuracies for POS taggers trained on subsamples of the WSJ with 20,000 tokens (for the majority vote, ties were broken randomly). majority vote on the automatically labelled data. To be able to run the variational inference model in an AL setting, we limit the size of the test data (the size of the pre-annotated data to be corrected) to batches of 5,000 tokens. This allows us to reduce the training time of the variational model and avoid unnecessary waiting times for the oracle. For NER (experiment 3), in contrast to POS tagging, we have a much smaller label set with only 5 labels (PER, ORG, LOC, MISC, O), and a highly skewed distribution where most of the instances belong to the negative class (O). To ensure a sufficient number of NEs in the data, we increase the batch size and use the whole out-of-domain testset with 4,395 sentences in the experiment.8 The overall accuracies of the different NER models are all in the range of 97.7-98.6%. Results for individual classes, however, vary considerably between the different models. 5 Results 5.1 Experiment 1: In-domain setting In our first experiment, we explore the benefits of our AL approach to error detection in a setting where we have a reasonably large amount of training data, and where training and test data come from the same domain (in-domain setting). We implement two selection strategies. The first one is a Query-by-Committee approach (QBC) where we use the disagreements in the predictions of our tagger ensemble to identify potential errors. For each instance i, we compute the entropy over the predicted labels M by the 7 taggers and select 8This is possible because, given the lower number of class labels, the training time for the VI-AL model for NER is much shorter than for the POS data. QBC VI-AL N label acc ED prec label acc ED prec 0 97.58 97.56 100 97.84 13.0 98.42 41.0 200 97.86 7.0 98.90 33.0 300 97.90 5.3 99.16 26.3 400 97.82 3.0 99.26 21.0 500 97.92 3.4 99.34 17.6 Table 2: Label accuracies on 5,000 tokens of WSJ text after N iterations, and precision for error detection (ED prec). the N instances with the highest entropy (Equation 1). H = − M X m=1 P(yi = m) log P(yi = m) (1) For each selected instance, we then replace the label predicted by majority vote with the gold label. Please note that the selected instances might already have the correct label, and thus the replacement does not necessarily increase accuracy but only does so when the algorithm selects a true error. We then evaluate the accuracy of the majority predictions after updating the N instances ranked highest for entropy9 (figure 1). 
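For comparison, the entropy-based QBC selection of Equation 1, together with the label-accuracy and ED-precision bookkeeping of Section 4.2, can be sketched as follows. This is our own illustration; ties in the majority vote are resolved deterministically here, whereas the paper breaks them randomly.

```python
import numpy as np
from collections import Counter

def qbc_select_and_correct(A, gold, n_select):
    """Rank instances by the entropy of the ensemble's label distribution
    (Eq. 1), then replace the majority-vote label of the top-ranked
    instances with the gold label supplied by the oracle."""
    majority = np.array([Counter(row).most_common(1)[0][0] for row in A])
    entropy = np.empty(len(A))
    for i, row in enumerate(A):
        p = np.array(list(Counter(row).values()), dtype=float)
        p /= p.sum()
        entropy[i] = -np.sum(p * np.log(p))
    selected = np.argsort(-entropy)[:n_select]
    corrected = majority.copy()
    corrected[selected] = gold[selected]
    true_pos = int(np.sum(majority[selected] != gold[selected]))
    return {
        "ED precision": true_pos / n_select,                         # true errors among selected
        "recall": true_pos / max(int(np.sum(majority != gold)), 1),  # share of all errors found
        "label accuracy": float(np.mean(corrected == gold)),
    }

A = np.array([["DT", "DT", "DT"], ["NN", "VB", "NN"],
              ["VB", "DT", "NN"], ["NN", "NN", "NN"]], dtype=object)
gold = np.array(["DT", "VB", "VB", "NN"])
print(qbc_select_and_correct(A, gold, n_select=2))
```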
We compare the QBC setting to our integrated approach where we guide the generative model with human supervision. Here the instances are selected according to their posterior entropy as assigned by the variational model, and after being disambiguated by the oracle, the predictions of a randomly selected classifier are updated with the oracle tags. We run the AL simulation for 500 iterations10 and select one new instance in each iteration. After replacing the predicted label for this instance by the gold label, we retrain the variational model and select the next instance, based on the new posterior probabilities learned on the modified dataset. We refer to this setting as VIAL. Table 2 shows POS tag accuracies (lab-acc) after N iterations of active learning. For the QBC setting, we see a slight increase in label accuracy of 0.3% (from 97.6 to 97.9) after manually validating 10% of the instances in the data. For the first 100 instances, we see a precision of 13% for error 9Please recall that, in contrast to a traditional QBC active learning approach, we do not retrain the classifiers but only update the labels predicted by the classifiers. 10We stopped after 500 iterations as this was enough to detect nearly all errors in the WSJ data. 1165 answer email newsg. review weblog bilstm 85.5 84.2 86.5 86.9 89.6 hun 88.5 87.4 89.2 89.7 92.2 stan 89.0 88.1 89.9 90.7 93.0 svm 87.4 86.1 88.2 88.8 91.3 tree 86.8 85.6 87.1 88.7 87.4 tweb 88.2 87.1 88.5 89.3 92.0 wapiti 85.2 82.4 84.6 86.5 87.3 avg. 87.2 85.8 87.7 88.7 90.4 major. 87.4 88.8 89.1 90.9 93.8 MACE 87.4 88.6 89.1 91.0 93.9 Table 3: Tagger accuracies on different web genres (trained on the WSJ); avg. accuracy, accuracy for majority vote (major.), and accuracy for MACE. detection. In the succeeding iterations, the precision slowly decreases as it gets harder to identify new errors. We even observe a slight decrease in label accuracy after 400 iterations that is due to the fact that ties are broken randomly and thus the vote for the same instance can vary between iterations. Looking at the AL setting with variational inference, we also see the highest precision for identifying errors during the first 100 iterations. However, the precision for error dection is more than 3 times as high as for QBC (41% vs. 13%), and we are still able to detect new errors during the last 100 iterations. This results in an increase in POS label accuracy in the corpus from 97.56% to 99.34%, a near perfect result. To find out what error types we were not able to identify, we manually checked the remaining 33 errors that we failed to detect in the first 500 iterations. Most of those are cases where an adjective (JJ) was mistaken for a past participle (VBN). (2) Companies were closedJJ/V BN yesterday Manning (2011), who presents a categorization of the type of errors made by a state-of-the-art POS tagger on the PTB, refers to the error type in example (2) as underspecified/unclear, a category that he applies to instances where “the tag is underspecified, ambiguous, or unclear in the context”. These cases are also hard to disambiguate for human annotators, so it is not surprising that our system failed to detect them. 5.2 Experiment 2: Out-of-domain setting In the second experiment, we test how our approach performs in an out-of-domain setting. 
For this, we use the English Web treebank (Bies et al., N answer email newsg review weblog 0 87.4 88.6 89.1 91.0 93.9 100 88.9 90.0 90.4 92.2 95.2 200 90.3 91.1 91.3 93.4 96.2 300 91.6 92.2 92.0 94.4 97.2 400 92.9 93.3 92.8 95.4 97.5 500 93.9 94.0 93.5 96.0 97.8 600 94.8 94.9 93.9 96.5 97.9 700 95.6 95.6 94.1 96.9 98.0 800 96.2 95.9 94.7 97.3 98.4 900 96.7 96.2 94.9 97.7 98.6 1000 97.0 96.8 95.1 97.9 98.6 Table 4: Increase in POS label accuracy on the web genres (5,000 tokens) after N iterations of error correction with VI-AL. 2012), a corpus of over 250,000 words of English weblogs, newsgroups, email, reviews and question-answers manually annotated for parts-ofspeech and syntax. Our objective is to develop and test a method for error detection that can also be applied to out-of-domain scenarios for creating and improving language resources when no indomain training data is available. We thus abstain from retraining the taggers on the web data and use the tools and models from experiment 1 (§5.1) as is, trained on the WSJ. As the English Web treebank uses an extended tagset with additional tags for URLs and email addresses etc., we allow the oracle to assign new tags unknown to the preprocessing classifiers. In a traditional AL setting, this would not be possible, as all class labels have to be known from the start. In our setting, however, this can be easily implemented. For each web genre, we extract samples of 5,000 tokens and run an active learning simulation with 500 iterations, where in each iteration one new instance is selected and disambiguated. After each iteration, we update the variational model and the predictions of a randomly selected classifier, as described in Section 5.1. Table 3 shows the performance of the WSJtrained taggers on the web data. As expected, the results are much lower than the ones from the indomain setting. This allows us to explore the behaviour of our error detection approach under different conditions, in particular to test our approach on tag predictions of a lower quality. The last three rows in Table 3 give the average tagger accuracy, the accuracy for the majority vote for the ensemble (not to be confused with QBC), and the accuracy we get when using the predictions from the variational model without AL (MACE). 1166 QBC VI-AL N # tp ED prec rec # tp ED prec rec 100 85 85.0 13.5 75 75.0 11.9 200 148 74.0 23.5 146 73.0 23.2 300 198 66.0 31.4 212 70.7 33.6 400 239 59.7 37.9 278 69.5 44.1 500 282 56.4 44.8 323 64.6 51.3 600 313 52.2 49.7 374 62.3 59.4 700 331 47.3 52.5 412 58.9 65.4 800 355 44.4 56.3 441 55.1 70.0 900 365 40.6 57.9 465 51.7 73.8 1000 371 37.1 58.9 484 48.4 76.8 Table 5: No. of true positives (# tp), precision (ED prec) and recall for error detection on 5,000 tokens from the answers set after N iterations. We can see that the majority baseline often, but not always succeeds in beating the best individual tagger. Results for MACE are more or less in the same range as the majority vote, same as in experiment 1, but do not improve over the baseline. Next, we employ AL in the out-of-domain setting (Tables 4, 5 and 6). Table 4 shows the increase in POS label accuracy for the five web genres after running N iterations of AL with variational inference (VI-AL). 
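For completeness, the majority-vote baseline reported in Tables 1 and 3, with the random tie-breaking mentioned in the Table 1 caption, amounts to little more than the following sketch.

```python
import numpy as np
from collections import Counter

def majority_vote(A, rng):
    """Per-instance majority label over the annotation matrix; ties are
    broken randomly, so repeated calls can yield different votes."""
    votes = []
    for row in A:
        counts = Counter(row)
        best = max(counts.values())
        votes.append(rng.choice([lab for lab, c in counts.items() if c == best]))
    return np.array(votes)

rng = np.random.default_rng(0)
A = np.array([["DT", "DT", "NN"], ["NN", "VB", "VB"], ["DT", "NN", "VB"]], dtype=object)
print(majority_vote(A, rng))
```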
Table 5 compares the results of the two selection strategies, QBC and VI-AL, on the answers subcorpus after an increasing number of AL iterations.11 Table 6 completes the picture by showing results for error detection for all web genres, for QBC and VI-AL, after inspecting 10% of the data (500 iterations). Table 4 shows that using VI-AL for error detection results in a substantial increase in POS label accuracy for all genres. VI-AL still detects new errors after a high number of iterations, without retraining the ensemble taggers. This is especially useful in a setting where no labelled target domain data is yet available. Table 5 shows the number of true positives amongst the selected error candidates as well as precision and recall for error detection for different stages of AL on the answers genre. We can see that during the early learning stages, both selection strategies have a high precision and QBC beats VI-AL. After 200 iterations it becomes more difficult to detect new errors, and the precision for both methods decreases. The decrease, however, is much slower for VI-AL, leading to higher precision after the initial rounds of training, and the gap in results becomes more and more pronounced. 11Due to space restrictions, we can only report detailed results for one web genre. Results for the other web genres follow the same trend (see Tables 4 and 6). QBC VI-AL # tp ED prec rec # tp ED prec rec answer 282 56.4 44.8 323 64.6 51.3 email 264 52.8 47.1 261 52.2 46.6 newsg. 195 39.0 36.0 214 42.8 39.6 review 227 45.4 49.7 255 51.0 55.8 weblog 166 33.2 54.6 196 39.2 64.5 Table 6: No. of true positives (# tp), precision (ED prec) and recall for error detection on 5,000 tokens after 500 iterations on all web genres. After 600 iterations, VI-AL beats QBC by more than 10%, thus resulting in a lower number of instances that have to be checked to obtain the same POS accuracy in the final dataset. Looking at recall, we see that by manually inspecting 10% of the data VI-AL manages to detect more than 50% of all errors, and after validating 20% of the data, we are able to eliminate 75% of all errors in the corpus. In contrast, QBC detects less than 60% of the annotation errors in the dataset. In the out-of-domain setting where we start with low-quality POS predictions, we are able to detect errors in the data with a much higher precision than in the in-domain setting, where the number of errors in the dataset is much lower. Even after 1,000 iterations, the precision for error detection is close to 50% in the answers data. Table 6 shows that the same trend appears for the other web genres, where we observe a substantially higher precision and recall when guiding AL with variational inference (VI-AL). Only on the email data are the results below the ones for QBC, but the gap is small. 5.3 Experiment 3: A new task (and language) We now want to test if the approach generalises well to other classification tasks, and also to new languages. To that end, we apply our approach to the task of Named Entity Recognition (NER) on German data (§4). Table 7 shows results for error detection for NER. In comparison to the POS experiments, we observe a much lower recall, for both QBC and VIAL. This is due to the larger size of the NER testset which results in a higher absolute number of errors in the data. 
Please bear in mind that recall is computed as the ratio of correctly identified errors to all errors in the testset (here we have a total of 110,405 tokens in the test set which means that we identified >35% of all errors by querying less than 1% of the data). Also note that the overall number of errors is higher in the QBC setting (1,756 1167 QBC VI-AL N # tp ED prec rec # tp ED prec rec 100 54 54.0 3.1 76 76.0 4.7 200 113 56.5 6.4 155 77.5 9.6 300 162 54.0 9.2 217 72.3 13.4 400 209 52.2 11.9 297 74.2 18.2 500 274 54.8 15.6 352 70.4 22.3 600 341 56.8 19.4 409 68.2 25.5 700 406 58.0 23.1 452 64.6 27.8 800 480 60.0 27.3 483 60.4 29.8 900 551 61.2 31.4 512 56.9 31.9 1000 617 61.7 35.1 585 58.5 35.8 1000 remaining errors:1,139 remaining errors:1,043 Table 7: Error detection results on the GermEval 2014 NER testset after N iterations (true positives, ED precision and recall). errors) than in the VI-AL setting (1,628 errors), as in the first setting we used a majority vote for generating the data pool while in the second setting we relied on the predictions of MACE. For POS tagging, we did not observe a difference between the initial data pools (Table 3). For NER, however, the initial predictions of MACE are better than the majority vote. During the first 800 iterations, precision for VIAL is much higher than for QBC, but then slowly decreases. For QBC, however, we see the opposite trend. Here precision stays in the range of 52-56% for the first 600 iterations. After that, it slowly increases, and during the last iterations QBC precision outperforms VI-AL. Recall, however, is higher for the VI-AL model, for all iterations. This means that even if precision is slightly lower than in the QBC setting after 800 iterations, it is still better to use the VI-AL model. For comparison, in the QBC setting we still have 1,139 errors left in the corpus after 1,000 iterations, while for VI-AL the number of errors remaining in the data is much lower (1,043). 5.4 Experiment 4: A real-world scenario In our final experiment, we test our approach in a real-world scenario with a human annotator in the loop. To that end, we let two linguistically trained human annotators correct POS errors identified by AL. We use the out-of-domain data from experiment 2 (§5.2), specifically the answers and weblog subcorpora. We run two VI-AL experiments where the oracle is presented with new error candidates for 500 iterations. The time needed for correction was 135 minutes (annotator 1, answers) and 157 minutes (annotator 2, weblog) for correcting 500 instances VI-AL with human annotator answers weblog N # tp ED prec rec # tp ED prec rec 100 71 68.0 10.8 62 62.0 20.3 200 103 63.5 20.2 112 56.0 36.7 300 177 58.0 27.6 156 52.0 51.1 400 224 55.3 35.1 170 42.5 55.7 500 259 51.2 40.6 180 36.0 59.0 Table 8: POS results for VI-AL with a human annotator on 2 web genres (true positives, precision and recall for error detection on 5,000 tokens) each. This includes the time needed to consult the annotation guidelines, as both annotators had no prior experience with the extended English Web treebank guidelines. We expect that the amount of time needed for correction will decrease when the annotators become more familiar with the annotation scheme. Results are shown in Table 8. As expected, precision as well as recall are lower for the human annotators as compared to the simulation study (Table 6). 
However, even with some annotation noise we were able to detect more than 40% of all errors in the answers data and close to 60% of all errors in the weblog corpus, by manually inspecting only 10% of the data. This results in an increase in POS label accuracy from 88.8 to 92.5% for the answers corpus and from 93.9 to 97.5% for the weblogs, which is very close to the 97.8% we obtained in the simulation study (Table 4). 6 Conclusions In the paper, we addressed a severely understudied problem, namely the detection of errors in automatically annotated language resources. We present an approach that combines an unsupervised generative model with human supervision in an AL framework. Using POS tagging and NER as test cases, we showed that our model can detect errors with high precision and recall, and works especially well in an out-of-domain setting. Our approach is language-agnostic and can be used without retraining the classifiers, which saves time and is of great practical use in an AL setting. We also showed that combining an unsupervised generative model with human supervision is superior to using a query-by-committee strategy for AL. Our system architecture is generic and can be applied to any classification task, and we expect it to be of use in many annotation projects, especially when dealing with non-standard data or in out-of-domain settings. 1168 Acknowledgments This research has been conducted within the Leibniz Science Campus “Empirical Linguistics and Computational Modeling”, funded by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art (MWK) of the state of Baden-W¨urttemberg. References Bharat Ram Ambati, Mridul Gupta, Rahul Agarwal, Samar Husain, and Dipti Misra Sharma. 2011. Error detection for treebank validation. In Proceedings of the 9th Workshop on Asian Language Resources. Chiang Mai, Thailand, ALR9, pages 23–30. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation 43(3):209–226. Eyal Beigman and Beata Beigman Klebanov. 2009. Learning with annotation noise. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Suntec, Singapore, ACL’09, pages 280–287. Alan Joseph Bekker and Jacob Goldberger. 2016. Training deep neural-networks based on unreliable labels. In Proceedings of IEEE International Conference on Acoustic, Speech and Signal Processing. ICASSP. Darina Benikova, Seid Muhie Yimam, Prabhakaran Santhanam, and Chris Biemann. 2015. GermaNER: Free open German Named Entity Recognition tool. In Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology (GSCL’15). Essen, Germany, pages 31–38. Jiang Bian, Yandong Liu, Ding Zhou, Eugene Agichtein, and Hongyuan Zha. 2009. Learning to recognize reliable users and content in social media with coupled mutual reinforcement. In Proceedings of the 18th International Conference on World Wide Web. Madrid, Spain, WWW’09, pages 51–60. Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English Web Treebank. Technical Report LDC2012T13, Philadelphia: Linguistic Data Consortium. David Cohn, Zoubin Ghahramani, and Michael Jordan. 1996. Active learning with statistical models. Journal of Artificial Intelligence Research 4:129–145. A. P. Dempster, N. M. 
Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39(1):1–38. Markus Dickinson and Detmar W. Meurers. 2003. Detecting errors in part-of-speech annotation. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics. Budapest, Hungary, EACL’03, pages 107–114. Eleazar Eskin. 2000. Automatic corpus correction with anomaly detection. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics. NAACL’00, pages 148–153. Manaal Faruqui and Sebastian Padó. 2010. Training and evaluating a German Named Entity Recognizer with semantic generalization. In Proceedings of the Conference on Natural Language Processing. KONVENS’10, pages 129–133. Jenny Rose Finkel and Christopher D. Manning. 2009. Nested Named Entity Recognition. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. EMNLP’09, pages 141–150. Jesús Giménez and Lluís Màrquez. 2004. SVMTool: A general POS tagger generator based on Support Vector Machines. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04). Lisbon, Portugal, LREC, pages 43–46. Péter Halácsy, András Kornai, and Csaba Oravecz. 2007. HunPos: An open source trigram tagger. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Prague, Czech Republic, ACL’07, pages 209–212. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Atlanta, Georgia, USA, NAACL-HLT’13, pages 1120–1130. Mark Johnson. 2007. Why doesn’t EM find good HMM pos-taggers? In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Prague, Czech Republic, EMNLP’07, pages 296–305. Pavel Kveton and Karel Oliva. 2002. (Semi-)automatic detection of errors in pos-tagged corpora. In Proceedings of the 19th International Conference on Computational Linguistics. Taipei, Taiwan, COLING’02, pages 1–7. Thomas Lavergne, Olivier Cappé, and François Yvon. 2010. Practical very large scale CRFs. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Uppsala, Sweden, ACL’10, pages 504–513. Hrafn Loftsson. 2009. Correcting a POS-tagged corpus using three complementary methods. In Proceedings of the 12th Conference of the European Chapter of the ACL. Athens, Greece, EACL’09, pages 523–531. Ji Ma, Yue Zhang, and Jingbo Zhu. 2014. Tagging the web: Building a robust web tagger with neural network. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Baltimore, Maryland, ACL’14, pages 144–154. Christopher D. Manning. 2011. Part-of-speech tagging from 97% to 100%: Is it time for some linguistics? In Proceedings of the 12th International Conference on Computational Linguistics and Intelligent Text Processing. Tokyo, Japan, CICLing’11, pages 171–189. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany, ACL’16, pages 412–418. Ines Rehbein. 2014. POS error detection in automatically annotated corpora.
In Proceedings of the 8th Linguistic Annotation Workshop. LAW VIII, pages 20–28. Dennis Reidsma and Jean Carletta. 2008. Reliability measurement without limits. Computational Linguistics 34(3):319–326. Vitor Rocio, Joaquim Silva, and Gabriel Lopes. 2007. Detection of strange and wrong automatic partof-speech tagging. In Proceedings of the Aritficial Intelligence 13th Portuguese Conference on Progress in Artificial Intelligence. Guimar˜aes, Portugal, EPIA07, pages 683–690. Helmut Schmid. 1999. Improvements in part-ofspeech tagging with an application to German. In Susan Armstrong, Kenneth Church, Pierre Isabelle, Sandra Manzi, Evelyne Tzoukermann, and David Yarowsky, editors, Natural Language Processing Using Very Large Corpora, Kluwer Academic Publishers, Dordrecht, volume 11 of Text, Speech and Language Processing, pages 13–26. H. Sebastian Seung, Manfred Opper, and Haim Sompolinsky. 1992. Query by committee. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory. Pittsburgh, Pennsylvania, USA, COLT’92, pages 287–294. Victor Sheng, Foster Provost, and Panagiotis G. Ipeirotis. 2008. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. KDD’08, pages 614–622. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good?: Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Honolulu, Hawaii, EMNLP’08, pages 254–263. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Walter Daelemans and Miles Osborne, editors, Proceedings of the SIGNLL Conference on Computational Natural Language Learning. Edmonton, Canada, CoNLL’03, pages 142–147. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Edmonton, Canada, NAACL’03, pages 173–180. Hans van Halteren. 2000. The detection of inconsistency in manually tagged text. In Proceedings of the COLING-2000 Workshop on Linguistically Interpreted Corpora. Centre Universitaire, Luxembourg, pages 48–55. Liyue Zhao, Gita Sukthankar, and Rahul Sukthankar. 2011. Incremental relabeling for active learning with noisy crowdsourced annotations. In Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third Inernational Conference on Social Computing. PASSAT and SocialCom, pages 728–733. 1170
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1171–1181 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1108 Abstractive Document Summarization with a Graph-Based Attentional Neural Model Jiwei Tan, Xiaojun Wan and Jianguo Xiao Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {tanjiwei,wanxiaojun,xiaojianguo}@pku.edu.cn Abstract Abstractive summarization is the ultimate goal of document summarization research, but it has previously been less investigated due to the immaturity of text generation techniques. Recently impressive progress has been made on abstractive sentence summarization using neural models. Unfortunately, attempts on abstractive document summarization are still in a primitive stage, and the evaluation results are worse than those of extractive methods on benchmark datasets. In this paper, we review the difficulties of neural abstractive document summarization, and propose a novel graph-based attention mechanism in the sequence-to-sequence framework. The intuition is to address the saliency factor of summarization, which has been overlooked by prior works. Experimental results demonstrate that our model is able to achieve considerable improvement over previous neural abstractive models. The data-driven neural abstractive method is also competitive with state-of-the-art extractive methods. 1 Introduction Document summarization is a task to generate a fluent, condensed summary for a document while keeping its important information. As a useful technique to alleviate the information overload people are facing today, document summarization has been extensively investigated. Efforts on document summarization can be categorized into extractive and abstractive methods. Extractive methods produce the summary of a document by extracting sentences from the original document. They have the advantage of producing fluent sentences and preserving the meaning of original documents, but also inevitably face the drawbacks of information redundancy and incoherence between sentences. Moreover, extraction is far from the way humans write summaries. In contrast, abstractive methods are able to generate better summaries with the use of arbitrary words and expressions, but generating abstractive summaries is much more difficult in practice. Abstractive summarization involves sophisticated techniques including meaning representation, content organization, and surface realization. Each of these techniques still has large room for improvement (Yao et al., 2017). Due to the immaturity of natural language generation techniques, fully abstractive approaches are still at an early stage and cannot always ensure grammatical abstracts. Recent neural networks enable an end-to-end framework for natural language generation. Success has been witnessed on tasks like machine translation and image captioning, together with abstractive sentence summarization (Rush et al., 2015). Unfortunately, the extension of sentence abstractive methods to the document summarization task is not straightforward.
Encoding and decoding for a long sequence of multiple sentences, currently still lack satisfactory solutions (Yao et al., 2017). Recent abstractive document summarization models are yet not able to achieve convincing performance, with a considerable gap from extractive methods. In this paper, we review the key factors of document summarization, i.e., the saliency, fluency, coherence, and novelty requirements of the generated summary. Fluency is what neural generation models are naturally good at, but the other factors are less considered in previous neural abstractive models. A recent study (Chen et al., 2016) starts to consider the factor of novelty, using a distraction mechanism to avoid redundancy. As far as we 1171 know, however, saliency has not been addressed by existing neural abstractive models, despite its importance for summary generation. In this work, we study how neural summarization models can discover the salient information of a document. Inspired by the graph-based extractive summarization methods, we introduce a novel graph-based attention mechanism in the encoderdecoder framework. Moreover, we investigate the challenges of accepting and generating long sequences for sequence-to-sequence (seq2seq) models, and propose a new hierarchical decoding algorithm with a reference mechanism to generate the abstractive summaries. The proposed method is able to tackle the constraints of saliency, nonredundancy, information correctness, and fluency under a unified framework. We conduct experiments on two large-scale corpora with human generated summaries. Experimental results demonstrate that our approach consistently outperforms previous neural abstractive summarization models, and is also competitive with state-of-the-art extractive methods. We organize the paper as follows. Section 2 introduces related work. Section 3 describes our method. In Section 4 we present the experiments and have discussion. Finally in Section 5 we conclude this paper. 2 Related Work 2.1 Extractive Summarization Methods Document summarization can be categorized to extractive methods and abstractive methods. Extractive methods extract sentences from the original document to form the summary. Notable early works include (Edmundson, 1969; Carbonell and Goldstein, 1998; McDonald, 2007). In recent years much progress has also been made under traditional extractive frameworks (Li et al., 2013; Dasgupta et al., 2013; Nishikawa et al., 2014). Neural networks have also been widely investigated on the extractive summarization task. Earlier works explore to use deep learning techniques in the traditional framework (Kobayashi et al., 2015; Yin and Pei, 2015; Cao et al., 2015a,b). More recent works predict the extraction of sentences in a more data-driven way. Cheng and Lapata (2016) propose an encoder-decoder approach where the encoder learns the representation of sentences and documents while the decoder classifies each sentence using an attention mechanism. Nallapati et al. (2017) propose a recurrent neural network (RNN)-based sequence model for extractive summarization of documents. Neural sentence extractive models are able to leverage large-scale training data and achieve performance better than traditional extractive summarization methods. 2.2 Abstractive Summarization Methods Abstractive summarization aims at generating the summary based on understanding the input text. It involves multiple subproblems like simplification, paraphrasing, and fusion. 
Previous research is mostly restricted in one or a few of the subproblems or specific domains (Woodsend and Lapata, 2012; Thadani and McKeown, 2013; Cheung and Penn, 2014; Pighin et al., 2014; Sun et al., 2015). As for neural network models, success is achieved on sentence abstractive summarization. Rush et al. (2015) train a neural attention model on a large corpus of news documents and their headlines, and later Chopra et al. (2016) extend their work with an attentive recurrent neural network framework. Nallapati et al. (2016) introduce various effective techniques in the RNN seq2seq framework. These neural sentence abstraction models are able to achieve state-of-the-art results on the DUC competition of generating headlinelevel summaries for news documents. Some recent works investigate neural abstractive models on the document summarization task. Cheng and Lapata (2016) also adopt a word extraction model, which is restricted to use the words of the source document to generate a summary, although the performance is much worse than the sentence extractive model. Nallapati et al. (2016) extend the sentence summarization model by trying a hierarchical attention architecture and a limited vocabulary during the decoding phase. However these models still investigate few properties of the document summarization task. Chen et al. (2016) first attempt to explore the novelty factor of summarization, and propose a distraction-based attentional model. Unfortunately these state-ofthe-art neural abstractive summarization models are still not competitive to extractive methods, and there are several problems remain to be solved. 3 Our Method 3.1 Overview In this section we introduce our method. We adopt an encoder-decoder framework, which is 1172 widely used in machine translation (Bahdanau et al., 2014) and dialog systems (Mou et al., 2016), etc. In particular, we use a hierarchical encoderdecoder framework similar to (Li et al., 2015), as shown in Figure 1. The main distinction of this work is that we introduce a graph-based attention mechanism which is illustrated in Figure 1b, and we propose a hierarchical decoding algorithm with a reference mechanism to tackle the difficulty of abstractive summary generation. In the following parts, we will first introduce the encoder-decoder framework, and then describe the graph-based attention and the hierarchical decoding algorithm. 3.2 Encoder The goal of the encoder is to map the input document to a vector representation. A document d is a sequence of sentences d = {si}, and a sentence si is a sequence of words si = {wi,k}. Each word wi,k is represented by its distributed representation ei,k, which is mapped by a word embedding matrix Ev. We adopt a hierarchical encoder framework, where we use a word encoder encword to encode the words of a sentence si into the sentence representation, and use a sentence encoder encsent to encode the sentences of a document d into the document representation. The input to the word encoder is the word sequence of a sentence, appended with an “<eos>” token indicating the end of a sentence. The word encoder sequentially updates its hidden state after receiving each word, as hi,k = encword(hi,k−1, ei,k). The last hidden state (after the word encoder receives “<eos>”) is denoted as hi,−1, and used as the embedding representation of the sentence si, denoted as xi. A sentence encoder is used to sequentially receive the embeddings of the sentences, given by hi = encsent(hi−1, xi). 
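A minimal sketch of such a two-level encoder is given below (PyTorch, our own illustration). It uses single-layer LSTMs with the 100/512 dimensions reported in Section 4.2, whereas the paper's word-level networks have three layers, and it simply takes the last sentence state as the document vector, standing in for the state after the <eod> pseudo-sentence introduced next.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Word-level LSTM -> sentence embeddings x_i -> sentence-level LSTM -> h_i and c."""

    def __init__(self, vocab_size, emb_dim=100, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_enc = nn.LSTM(emb_dim, hid_dim, batch_first=True)   # 1 layer here (paper: 3)
        self.sent_enc = nn.LSTM(hid_dim, hid_dim, batch_first=True)

    def forward(self, doc):
        # doc: list of 1-D LongTensors, one per sentence, each ending with an <eos> id
        sent_vecs = []
        for sent in doc:
            out, _ = self.word_enc(self.embed(sent).unsqueeze(0))
            sent_vecs.append(out[:, -1, :])               # last word state serves as x_i
        sent_out, _ = self.sent_enc(torch.stack(sent_vecs, dim=1))
        return sent_out, sent_out[:, -1, :]               # sentence states h_i, document vector c

enc = HierarchicalEncoder(vocab_size=1000)
doc = [torch.tensor([4, 8, 15, 2]), torch.tensor([16, 23, 42, 2]), torch.tensor([7, 2])]
sent_states, c = enc(doc)
print(sent_states.shape, c.shape)    # torch.Size([1, 3, 512]) torch.Size([1, 512])
```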
A pseudo sentence of an “<eod>” token is appended at the end of the document to indicate the end of the whole document. The hidden state after the sentence encoder receives “<eod>” is treated as the representation of the input document c = h−1. We use the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) as both the word encoder encword and sentence encoder encsent. In particular, we adopt the variant of LSTM structure in (Graves, 2013). 3.3 Decoder with Attention The decoder is used to generate output sentences {s ′ j} according to the representation of the input sentences. We also use an LSTM-based hierarchical decoder framework to generate the summary, because the summary typically comprises several sentences. The sentence decoder decsent receives the document representation c as the initial state h ′ 0 = c, and predicts the sentence representations sequentially, by h ′ j = decsent(h ′ j−1, x ′ j−1), where x ′ j−1 is the encoded representation of the previously generated sentence s ′ j−1. The word decoder decword receives a sentence representation h ′ j as the initial state h ′ j,0 = h ′ j, and predicts the word representations sequentially, by h ′ j,k = decword(h ′ j,k−1, ej,k−1), where ej,k−1 is the embedding of the previously generated word. The predicted word representations are mapped to vectors of the vocabulary size dimension, and then normalized by a softmax layer as the probability distribution of generating the words in the vocabulary. A word decoder stops when it generates the “<eos>” token and similarly the sentence decoder stops when it generates the “<eod>” token. In primitive decoder models, c is the same for generating all the output words, which requires c to be a sufficient representation for the whole input sequence. The attention mechanism (Bahdanau et al., 2014) is usually introduced to alleviate the burden of remembering the whole input sequence, and to allow the decoder to pay different attention to different parts of input at different generation states. The attention mechanism sets a different cj when generating sentence j, by cj = P i αj ihi. αj i indicates how much the i-th original sentence si contributes to generating the j-th sentence. αj i is usually computed as: αj i = eη  hi,h ′ j  P l eη(hl,h′ j) (1) where η is the function modeling the relation between hi and h ′ j. η can be defined using various functions including η (a, b) = aT b, η (a, b) = aT Mb, and even a non-linear function achieved by a multi-layer neural network. In this paper we use η (a, b) = aT Mb where M is a parameter matrix. 3.4 Graph-based Attention Mechanism Traditional attention computes the importance score of a sentence si, when generating sentence s ′ j, according to the relation between the hidden state hi and current decoding state h ′ j, as shown 1173 word encoder sentence encoder <eod> <eod> word decoder sentence decoder 2h 1h 3h ' 1h ' 2h c ' 3h 1, 1 h  1,2 h 1,1 h ' 1,1 h ' 1,3 h ' 1,2 h (a) Traditional attention. word encoder sentence encoder <eod> <eod> word decoder sentence decoder 2h 1h 3h ' 1h ' 2h c ' 3h graph ranking model 1,1 h 1,2 h 1, 1 h  ' 1,1 h ' 1,2 h ' 1,3 h (b) Graph-based attention. Figure 1: Hierarchical encoder-decoder framework and comparison of the attention mechanisms. in Figure 1a. This attention mechanism is useful in scenarios like machine translation and image captioning, because the model is able to learn a relevance mapping between the input and output. 
However, for document summarization, it is not easy for the model to learn how to summarize the salient information of a document, i.e., which sentences are more important to a document. To tackle this challenge, we learn from graphbased extractive summarization models TextRank (Mihalcea and Tarau, 2004) and LexRank (Erkan and Radev, 2004), which are based on the PageRank (Page et al., 1999) algorithm. These unsupervised graph-based models show good ability to identify important sentences in a document. The underlying idea is that a sentence is important in a document if it is heavily linked with many important sentences (Wan, 2010). In graph-based extractive summarization, a graph G is constructed to rank the original sentences. The vertices V are the set of n sentences to be considered, and the edges E are the relations between the sentences, which are typically modeled by the similarity of sentences. Let W ∈ Rn×n be the adjacent matrix. Then the saliency scores of the sentences are determined by making use of the global information on the graph recursively, as: f (t + 1) = λWD−1f(t) + (1 −λ)y (2) where f = [f1, . . . , fn] ∈Rn denotes the rank scores of the n sentences. f(t) denotes the rank scores at the t-th iteration. D is a diagonal matrix with its (i, i)-element equal to the sum of the i-th column of W. Assume we use hi as the representation of si, and W(i, j) = hT i Mhj, where M is a parameter matrix to be learned. λ is a damping factor. y ∈Rn with all elements equal to 1/n. The solution of f can be calculated using the closedform: f = (1 −λ)(I −λWD−1)−1y (3) In the graph model, the importance score of a sentence si is determined by the relation between hi and the {hl} of all other sentences. Relatively, in traditional attention mechanisms, the importance (attention) score αj i is determined by the relation between hi and h ′ j, regardless of other original sentences. In our model we hope to combine the two effects, and compute the rank scores of the original sentences regarding h ′ j, so that the importance scores of original sentences are different when decoding different state h ′ j, denoted by fj. In our model we use the scores fj to compute the attention. Therefore, h ′ j should be considered in the graph model. Inspired by the query-focused graph-based extractive summarization model (Wan et al., 2007), we realize this by applying the idea of topic-sensitive PageRank (Haveliwala, 2002), which is to rank the sentences with the concern of their relevance to the topic. We treat the current decoding state h ′ j as the topic and add it into the graph as the 0-th pseudo-sentence. Given a topic T, the topic-sensitive PageRank is similar to Eq. 3 except that y becomes: yT = ( 1 |T| i ∈T 0 i /∈T (4) Therefore yT is always a one hot vector and only y0 = 1, indicating the 0-th sentence is s ′ j. Denote W j as the new adjacent matrix added with h ′ j, and Dj as the new diagonal matrix corresponding to W j. Then the convergence score vector fj contains the importance scores for all the 1174 input sentences when generating sentence s ′ j, as: fj = (1 −λ)(I −λW jDj−1)−1yT (5) The new scores fj can be used to compute the graph-based attention when decoding h ′ j, to find the sentences which are both globally important and relevant to current decoding state h ′ j. 
Inspired by Chen et al. (2016), we adopt a distraction mechanism to compute the final attention value \alpha_i^j: we subtract the rank scores of the previous step, which penalizes the model for attending to previously attended sentences and also helps normalize the rank scores f^j. The graph-based attention is finally computed as: \alpha_i^j = \frac{\max(f_i^j - f_i^{j-1}, 0)}{\sum_l \max(f_l^j - f_l^{j-1}, 0)}   (6) where f^0 is initialized with all elements equal to 1/n. The graph-based attention focuses only on sentences that are ranked higher than at the previous decoding step, so it concentrates on sentences that are both salient and novel. Both Eq. 5 and Eq. 6 are differentiable; thus we can use the graph-based attention function of Eq. 6 in place of the traditional attention function of Eq. 1, and the neural model with graph-based attention can still be trained with standard gradient-based methods. 3.5 Model Training The loss function L of the model is the negative log-likelihood of generating the summaries over the training set D: L = \sum_{(Y, X) \in D} -\log p(Y \mid X; \theta)   (7) where X = {x_1, \dots, x_{|X|}} and Y = {y_1, \dots, y_{|Y|}} denote the word sequences of a document and its summary respectively, including the “<eos>” and “<eod>” tokens that carry structure information. Then \log p(Y \mid X; \theta) = \sum_{\tau=1}^{|Y|} \log p(y_\tau \mid \{y_1, \dots, y_{\tau-1}\}, c; \theta)   (8) and \log p(y_\tau \mid \{y_1, \dots, y_{\tau-1}\}, c; \theta) is modeled by the LSTM encoder and decoder. We use the Adamax gradient-based optimization method (Kingma and Ba, 2014) to optimize the model parameters \theta. 3.6 Decoding Algorithm We find several problems during summary generation, including out-of-vocabulary (OOV) words, information incorrectness, error accumulation and repetition. These problems leave the generated abstractive summaries far from satisfactory. In this work, we propose a hierarchical decoding algorithm with a reference mechanism to tackle these difficulties, which effectively improves the quality of the generated summaries. Since OOV words frequently occur in named entities, we first identify the entities of a document using an NLP toolkit such as Stanford CoreNLP¹. We then prefix every entity with an “@entity” token and a number indicating how many words the entity has. The entity prefixes are intended to help handle entities that consist of more than one word, and to improve the accuracy of recovering OOV words inside entities. After decoding, we recover the OOV words by matching entities in the original document according to their contexts. For the hierarchical decoder, a major challenge is that the same sentences or phrases are often repeated in the output. A beam search strategy may help alleviate repetition within a sentence, but repetition across the whole generated summary remains a problem. Word-level beam search is not easily extended to the sentence level, because the K-best sentences generated by a word decoder are mostly similar to each other, as also noticed by Li et al. (2016). In this paper we propose a hierarchical beam search algorithm with a reference mechanism. The hierarchical algorithm comprises K-best word-level beam search and N-best sentence-level beam search. At the word level, the only difference from vanilla beam search is that we add an extra term to the score p̃(y_\tau) of generating word y_\tau, so that score(y_\tau) = p̃(y_\tau) + \gamma (ref(Y_{\tau-1} + y_\tau, s^*) - ref(Y_{\tau-1}, s^*)), where Y_{\tau-1} = {y_1, \dots, y_{\tau-1}}, p̃(y_\tau) = \log p(y_\tau \mid Y_{\tau-1}, c; \theta), and s^* is an original sentence to refer to.
ref is a function that calculates the ratio of bigram overlap between two texts. The added term aims to favor generated words y_\tau that improve the bigram overlap between the currently generated summary Y_{\tau-1} and the target original sentence s^*. At the word-decoder level, the reference mechanism helps both to improve information correctness and to avoid redundancy. Because the reference score is based on the bigram-overlap improvement over the whole generated summary Y_{\tau-1}, this awareness of previously generated sentences also helps alleviate sentence-level redundancy. A factor \gamma is introduced to control the influence of the reference mechanism. Note that, because the search is non-optimal, the generated sentence will still differ from the original sentence even with an extremely large \gamma. At the sentence level, the N-best sentence beam keeps N generated sentences, each produced by referring to a different original sentence; the reference sentences are those with the highest attention scores that have not yet been used as a reference. By referring to N different sentences, the N candidate sentences are guaranteed to be diverse. Sentence-level beam search is realized by maximizing the accumulated score of all the sentences generated.

¹ http://stanfordnlp.github.io/CoreNLP/

4 Experiments

4.1 Dataset

We conduct experiments on two large-scale corpora, CNN and DailyMail, which have been widely used in neural document summarization. The corpora were originally constructed by Hermann et al. (2015), who collected human-written abstractive highlights for the news stories on the CNN and DailyMail websites. The statistics and splits of the two datasets are listed in Table 1.

Dataset      Train    Valid   Test    D.L.   S.L.
CNN           83568    1220    1093   29.8   3.54
DailyMail    196557   12147   10396   26.0   3.84

Table 1: The statistics of the two datasets. D.L. and S.L. indicate the average number of sentences in the document and summary, respectively.

4.2 Implementation

We use the versions of the corpora that are already provided with labeled entities (Nallapati et al., 2016). The documents and summaries are first lowercased and tokenized, and all digit characters are replaced with the “#” symbol, similar to Nallapati et al. (2016, 2017). We keep the 40,000 most frequent words, and all other words are replaced with the “<OOV>” token. We use Theano² for the implementation. For the word encoder and decoder we use three layers of LSTM, and for the sentence encoder and decoder we use one layer of LSTM. The dimension of the hidden vectors is 512 throughout. We use pre-trained GloVe vectors³ (Pennington et al., 2014) to initialize the word vectors, which are further trained within the model; the dimension of the word vectors is 100. \lambda is set to 0.9. The parameters of Adamax are set as suggested by Kingma and Ba (2014). The batch size is 8 documents, and an epoch consists of 10,000 randomly sampled documents. Convergence is reached within 200 epochs on the DailyMail dataset and 120 epochs on the CNN dataset. Training takes about one day per 30 epochs on a GTX-1080 GPU card. \gamma is tuned on the validation set, and the best choice is 300. The beam sizes for the word decoder and the sentence decoder are 15 and 2, respectively.

² https://github.com/Theano/Theano
³ http://nlp.stanford.edu/projects/glove

4.3 Evaluation

We adopt the widely used ROUGE toolkit (Lin, 2004) for evaluation. We first compare with the results reported by Chen et al. (2016), which include various traditional extractive methods and a state-of-the-art abstractive model (Distraction-M3), on the CNN dataset, as shown in Table 2. Uni-GRU is a non-hierarchical seq2seq baseline model.
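Returning briefly to the reference mechanism of Section 3.6: the paper describes ref only as the "ratio of bigram overlap between two texts", so the sketch below is one plausible reading of the word-level rescoring; the particular normalization (dividing by the number of generated bigrams) and the toy tokens are our own assumptions.

```python
def bigrams(tokens):
    return set(zip(tokens, tokens[1:]))

def ref(generated, s_star):
    # Assumed reading: fraction of the generated text's bigrams that also occur in s*.
    g = bigrams(generated)
    return len(g & bigrams(s_star)) / len(g) if g else 0.0

def rescore(log_p, prefix, candidate, s_star, gamma=300.0):
    # score(y_tau) = log p(y_tau | ...) + gamma * (ref(Y + y_tau, s*) - ref(Y, s*))
    bonus = ref(prefix + [candidate], s_star) - ref(prefix, s_star)
    return log_p + gamma * bonus

# toy usage: the reference sentence s* is the attended original sentence
s_star = "mary day used taxpayers money to go on luxury holidays".split()
prefix = "she said mary".split()
print(rescore(-2.3, prefix, "day", s_star))      # forms the bigram "mary day": bonus added
print(rescore(-2.3, prefix, "bananas", s_star))  # no overlap gain: plain log-probability
```

Because the tuned \gamma (300) is large relative to word log-probabilities, the overlap term can strongly steer the beam toward wording from the attended sentence, which is consistent with the role described above.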
In Table 3 we compare our method with the results of state-of-the-art neural summarization methods reported in recent papers. The extractive models include NN-SE (Cheng and Lapata, 2016) and SummaRuNNer (Nallapati et al., 2017); SummaRuNNer-abs is an extractive model similar to SummaRuNNer but trained directly on the abstractive summaries. We also include several baselines reported by Cheng and Lapata (2016), although they are tested on only 500 samples of the test set. LREG is a feature-based method using linear regression. NN-ABS is a neural abstractive baseline that is a simple hierarchical extension of Rush et al. (2015). NN-WE is an abstractive model that restricts generation to words from the original document. Lead-3 is a strong extractive baseline that uses the first three sentences of the document as the summary.

In Table 4 we compare our model with the abstractive attentional encoder-decoder models of Nallapati et al. (2016), which leverage several effective techniques and achieve state-of-the-art performance on sentence-level abstractive summarization tasks. words-lvt2k and words-lvt2k-ptr are flat models, and words-lvt2k-hieratt is a hierarchical extension.

Method           Rouge-1  Rouge-2  Rouge-L
Lead-3             26.1      9.6     17.8
Luhn               23.2      7.2     15.5
Edmundson          24.5      8.2     16.7
LSA                21.2      6.2     14.0
LexRank            26.1      9.6     17.7
TextRank           23.3      7.7     15.8
Sum-basic          22.9      5.5     14.8
KL-sum             20.7      5.9     13.7
Uni-GRU            18.4      4.8     14.3
Distraction-M3     27.1      8.2     18.7
Our Method         30.3      9.8     20.0

Table 2: Comparison results on the CNN test set using the full-length F1 variants of Rouge.

Method              Rouge-1  Rouge-2  Rouge-L
LREG(500)             18.5      6.9     10.2
NN-ABS(500)            7.8      1.7      7.1
NN-WE(500)            15.7      6.4      9.8
Lead-3                21.9      7.2     11.6
NN-SE                 22.7      8.5     12.5
SummaRuNNer-abs       23.8      9.6     13.3
SummaRuNNer           26.2     10.8     14.4
Our Method            27.4     11.3     15.1

Table 3: Comparison results on the DailyMail test set using Rouge recall at 75 bytes.

Method                 Rouge-1  Rouge-2  Rouge-L
words-lvt2k              32.5     11.8     29.5
words-lvt2k-ptr          32.1     11.7     29.2
words-lvt2k-hieratt      31.8     11.6     28.7
Our Method               38.1     13.9     34.0

Table 4: Comparison results on the merged CNN/DailyMail test set using the full-length F1 metric.

The results in Table 2 show that our abstractive method outperforms the traditional extractive methods and the distraction-based abstractive model. The results in Tables 3 and 4 show that our method improves considerably over the neural abstractive baselines and is able to outperform state-of-the-art neural extractive methods. An interesting observation is that the hierarchical model in Table 4 scores lower than the flat models, which may illustrate how difficult it is for a traditional attention model to identify the important information in a document.

We also conducted a human evaluation on 20 random samples from the DailyMail test set and compared the summaries generated by our method with the outputs of Lead-3, NN-SE (Cheng and Lapata, 2016) and Distraction (Chen et al., 2016). The output summaries of NN-SE were provided by the authors, and the output summaries of Distraction were obtained by running the code provided by the authors on the DailyMail dataset.

Method         Informative  Concise  Coherent  Fluent
Lead-3            3.60        3.75     4.16     3.85
NN-SE             3.85        3.70     3.48     3.78
Distraction       3.03        3.25     2.93     3.65
Our Method        3.93        3.82     3.53     3.80

Table 5: Human evaluation results.
Three participants were asked to compare the generated summaries with the human summaries and to assess each summary from four independent perspectives: (1) how informative the summary is; (2) how concise the summary is; (3) how coherent (between sentences) the summary is; and (4) how fluent and grammatical the sentences of the summary are. Each property is assessed with a score from 1 (worst) to 5 (best). The average results are presented in Table 5.

As shown in Table 5, our method consistently outperforms the previous state-of-the-art abstractive method, Distraction. Compared with the extractive methods, our method generates more informative and concise summaries, which shows the advantage of abstractive methods. The Distraction method in fact usually produces the shortest summaries, but its conciseness score is low mainly because it sometimes generates repeated sentences. The repetition also causes Distraction to receive a low coherence score. Concerning coherence and fluency, our abstractive method achieves slightly better scores than NN-SE, while, not surprisingly, Lead-3 gets the best scores. The fluency scores show that the abstractive model is good at generating fluent and grammatical sentences.

[Figure 2: Results of different settings of the hyperparameters, tested on 500 samples from the DailyMail test set. (a) Rouge-2 F1 score vs. λ, within 200 and within 300 training epochs. (b) Rouge-2 F1 score vs. γ.]

4.4 Model Validation

We conduct experiments to see how the model's performance is affected by the choice of hyperparameters. For efficiency, we test on 500 random samples from the DailyMail test set. Figure 2a shows the maximum average Rouge-2 F1 score achieved when the model is trained with different λ values within 200 and within 300 epochs. With a larger λ, the performance is better and convergence is faster. At λ = 1.0 the model fails to train because it runs into a singular matrix: I − λWD⁻¹ in Eq. 5 is then singular, since each column of WD⁻¹ sums to 1 and the matrix therefore has an eigenvalue of 1. Figure 2b shows the results achieved with different γ values in the hierarchical decoding algorithm. γ = 0 is the baseline of the traditional decoding algorithm, which does not refer to the original document. Its poor results indicate that, even if the model is able to learn to identify the salient information in the original document, performance is limited by the model's ability to generate a long output sequence. That may be one reason why simple extensions of seq2seq models fail on the abstractive document summarization task. Performance is significantly improved with a reasonable γ, and the optimal γ value is consistent with the one chosen on the validation set. With an extremely large γ, performance begins to decrease, because the model copies too much from the original document, and the generated text also becomes less fluent. The results show that introducing the reference mechanism into the hierarchical beam search is very effective. The γ factor significantly affects the results, but the optimal value is easy to decide on a validation set.

We also conduct ablation experiments on the CNN dataset to verify the effectiveness of the proposed model. Results on the CNN test set are shown in Table 6.
“w/o GraphAtt” replaces the graph-based attention with a traditional attention function. “w/o SentenceBeam” removes the sentence-level beam search. “w/o BeamSearch” removes both the sentence-level and word-level beam search and uses a greedy decoding algorithm with the reference mechanism.

Framework            Rouge-1  Rouge-2  Rouge-L
Our Method             30.3      9.8     20.0
w/o GraphAtt           29.2      9.0     19.0
w/o SentenceBeam       29.6      9.3     19.1
w/o BeamSearch         25.1      6.7     17.9

Table 6: Results of removing different components of our method on the CNN test set, using the full-length F1 variants of Rouge. Two-tailed t-tests demonstrate that the differences between Our Method and the other frameworks are all statistically significant (p < 0.01).

As seen from Table 6, the graph-based attention mechanism is significantly better than the traditional attention mechanism for the document summarization task. Beam search significantly improves the generated summaries, and our proposed decoding algorithm enables a sentence-level beam search, which further improves generated summaries that contain multiple sentences.

4.5 Case Study

We show a case study of a sample⁴ from the DailyMail test set in Figure 3. We show the “@entity” tokens and numbers here, although they are removed in the evaluation. We compare our result with the output of a model using traditional attention (Baseline Attention). We also show the output of a Baseline Decoder, which sets γ = 0 and does not use the sentence-level beam search, in order to study how difficult it is for a traditional decoder to generate multiple sentences.

⁴ The original story and highlights can be found at http://www.dailymail.co.uk/news/article-3041766/Benefitscheat-pocketed-17-000-taxpayers-money.html

Gold Summary: @entity 2 mary day , ## , claimed over £ ##,### in benefits despite not being eligible . she had £ ##,### savings in the bank which meant she was not entitled . day used taxpayers ' money to go on luxury holidays to @entity 1 indian resort of @entity 1 goa . pleaded guilty to dishonestly claiming benefits and has paid back money .

Lead3: a benefits cheat who pocketed almost £ ##,### of taxpayers ' money and spent it on a string of luxury holidays despite having £ ##,### in the bank has avoided jail . @entity 2 mary day , ## , of @entity 1 swanage in @entity 1 dorset , used taxpayers ' money to go on luxury holidays to the @entity 1 indian resort of @entity 1 goa for up to a month each time . day fraudulently claimed £ ##,### of income support and disability allowance despite having £ ##,### of her own savings in the bank .

Baseline Decoder: ## - month - old @entity 2 mary day , ## , was given £ ##,### in money . the ## - year - old claimed £ ##,### in disability allowance . <eod> the ## - year - old was given a six - month prison sentence . ## - year - old pleaded guilty to two counts of fraud .

Baseline Attention: @entity 2 mary day , ## , used taxpayers ' money to go on luxury holidays . claimed £ ##,### of income support and disability allowance despite having savings in the bank . <eod> benefits of taxpayers £ ##,### in disability handouts .
Our Method: @entity 2 mary day , ## , used taxpayers ' money to go on luxury holidays to the @entity 1 indian resort of @entity 1 goa . despite having £ ##,### of her own savings in the bank , she claimed £ ##,### of income support and disability allowance . she pleaded guilty and had given the sentence for three months in prison , but suspended the sentence for ## months .

Figure 3: Examples of generated summaries.

Many observations can be made from Figure 3. The lead three sentences mainly focus on the money information and are not sufficient. The Baseline Decoder usually ends the generation too early; the “<eod>” token indicates where its output originally stops. When we force the decoder not to end there, the model shows some ability to continue producing important information, but two flaws appear. The first is the repetition of “## - year - old”: because the word decoder is unaware of the previously generated sentences, it keeps generating this sequence as the subject. The second, and more important, is information incorrectness: “## - month - old” is not an appropriate description of the protagonist, and the “six - month prison sentence” is in fact three months. Information incorrectness occurs because the decoder aims only at generating a fluent sentence from the input representation; nothing encourages consistency with the original input. The proposed hierarchical decoding algorithm helps alleviate both problems. Awareness of all the previously generated sentences keeps the decoder from generating the same important information again and again, and favoring bigram overlap with the original sentences helps generate more correct sentences; for example, the model correctly distinguishes between the “three-month sentence” and the “##-month suspension”. In conclusion, our method is able to identify the most important information in the original document, and the decoding algorithm we propose generates abstractive summaries that are more discourse-fluent and information-correct.

The visualization of the graph-based attention when our method generates this example is shown in Figure 4.

[Figure 4: Attention heatmap when generating the example summary. I_i and O_i indicate the i-th sentence of the input and output, respectively.]

The graph-based attention mechanism appears to find the important sentences in the input document, and the distraction mechanism makes the decoder focus on different sentences during decoding. Gradually the decoder attends to “<eod>” until it stops.

5 Conclusion and Future Work

In this paper we tackle the challenging task of abstractive document summarization, which remains under-investigated to date. We study the difficulty of the abstractive document summarization task and address the need to find salient content in the original document, which has been overlooked by previous studies. We propose a novel graph-based attention mechanism within a hierarchical encoder-decoder framework, together with a hierarchical beam search algorithm for generating multi-sentence summaries. Extensive experiments verify the effectiveness of the proposed method: results on two large-scale datasets show that it achieves state-of-the-art abstractive document summarization performance and is competitive with state-of-the-art neural extractive summarization models. There is much future work to do; an appealing direction is to investigate neural abstractive methods for multi-document summarization, which is more challenging and lacks training data.

Acknowledgments

This work was supported by the 863 Program of China (2015AA015403), NSFC (61331011), and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).
We thank the anonymous reviewers for helpful comments and Xinjie Zhou, Jianmin Zhang for doing human evaluation. Xiaojun Wan is the corresponding author. 1179 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Ziqiang Cao, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. 2015a. Ranking with recursive neural networks and its application to multi-document summarization. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 2530, 2015, Austin, Texas, USA.. pages 2153–2159. Ziqiang Cao, Furu Wei, Sujian Li, Wenjie Li, Ming Zhou, and Houfeng Wang. 2015b. Learning summary prior representation for extractive summarization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, pages 829– 833. https://doi.org/10.3115/v1/P15-2136. Jaime G. Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In SIGIR ’98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia. pages 335–336. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for document summarization. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16). pages 2754–2760. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 484–494. https://doi.org/10.18653/v1/P16-1046. Kit Jackie Chi Cheung and Gerald Penn. 2014. Unsupervised sentence enhancement for automatic summarization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 775–786. https://doi.org/10.3115/v1/D14-1085. Sumit Chopra, Michael Auli, and M. Alexander Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 93–98. https://doi.org/10.18653/v1/N16-1012. Anirban Dasgupta, Ravi Kumar, and Sujith Ravi. 2013. Summarization through submodularity and dispersion. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1014–1022. http://aclweb.org/anthology/P13-1100. Harold P Edmundson. 1969. New methods in automatic extracting. Journal of the ACM (JACM) 16(2):264–285. G¨unes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res. (JAIR) 22:457– 479. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 . Taher H. Haveliwala. 2002. Topic-sensitive pagerank. In Proceedings of the Eleventh International World Wide Web Conference, WWW 2002, May 7-11, 2002, Honolulu, Hawaii. pages 517–526. 
Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 1693–1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Hayato Kobayashi, Masaki Noguchi, and Taichi Yatsuka. 2015. Summarization based on embedding distributions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1984–1989. https://doi.org/10.18653/v1/D15-1232. Chen Li, Xian Qian, and Yang Liu. 2013. Using supervised bigram-based ilp for extractive summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1004–1013. http://aclweb.org/anthology/P13-1099. Jiwei Li, Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1106–1115. https://doi.org/10.3115/v1/P151107. 1180 Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562 . Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 workshop. Barcelona, Spain, volume 8. Ryan T. McDonald. 2007. A study of global inference algorithms in multi-document summarization. In Advances in Information Retrieval, 29th European Conference on IR Research, ECIR 2007, Rome, Italy, April 2-5, 2007, Proceedings. pages 557–564. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. pages 404–411. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 3349–3358. http://aclweb.org/anthology/C16-1316. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA.. pages 3075– 3081. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequenceto-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 280–290. https://doi.org/10.18653/v1/K16-1028. Hitoshi Nishikawa, Kazuho Arita, Katsumi Tanaka, Tsutomu Hirao, Toshiro Makino, and Yoshihiro Matsuo. 2014. 
Learning to generate coherent summary with discriminative hidden semi-markov model. In Proceedings of COLING 2014. Dublin City University and Association for Computational Linguistics, pages 1648–1659. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1532–1543. https://doi.org/10.3115/v1/D14-1162. Daniele Pighin, Marco Cornolti, Enrique Alfonseca, and Katja Filippova. 2014. Modelling events through memory-based, open-ie patterns for abstractive summarization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 892–901. https://doi.org/10.3115/v1/P14-1084. M. Alexander Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 379–389. https://doi.org/10.18653/v1/D15-1044. Rui Sun, Yue Zhang, Meishan Zhang, and Donghong Ji. 2015. Event-driven headline generation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 462–472. https://doi.org/10.3115/v1/P15-1045. Kapil Thadani and Kathleen McKeown. 2013. Supervised sentence fusion with single-stage inference. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, pages 1410– 1418. Xiaojun Wan. 2010. Towards a unified approach to simultaneous single-document and multi-document summarizations. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee, pages 1137–1145. Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2007. Manifold-ranking based topic-focused multidocument summarization. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 612, 2007. pages 2903–2908. Kristian Woodsend and Mirella Lapata. 2012. Multiple aspect summarization using integer linear programming. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 233–243. Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2017. Recent advances in document summarization. Knowledge and Information Systems . Wenpeng Yin and Yulong Pei. 2015. Optimizing sentence modeling and selection for document summarization. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015. pages 1383–1389. 1181
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1182–1192, Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1109

Probabilistic Typology: Deep Generative Models of Vowel Inventories
Ryan Cotterell and Jason Eisner
Department of Computer Science, Johns Hopkins University
{ryan.cotterell,eisner}@jhu.edu

Abstract

Linguistic typology studies the range of structures present in human language. The main goal of the field is to discover which sets of possible phenomena are universal, and which are merely frequent. For example, all languages have vowels, while most—but not all—languages have an [u] sound. In this paper we present the first probabilistic treatment of a basic question in phonological typology: What makes a natural vowel inventory? We introduce a series of deep stochastic point processes, and contrast them with previous computational, simulation-based approaches. We provide a comprehensive suite of experiments on over 200 distinct languages.

1 Introduction

Human languages exhibit a wide range of phenomena, within some limits. However, some structures seem to occur or co-occur more frequently than others. Linguistic typology attempts to describe the range of natural variation and seeks to organize and quantify linguistic universals, such as patterns of co-occurrence. Perhaps one of the simplest typological questions comes from phonology: which vowels tend to occur and co-occur within the phoneme inventories of different languages? Drawing inspiration from the linguistic literature, we propose models of the probability distribution from which the attested vowel inventories have been drawn.

It is a typological universal that every language contains both vowels and consonants (Velupillai, 2012). But which vowels a language contains is guided by softer constraints, in that certain configurations are more widely attested than others. For instance, in a typical phoneme inventory, there tend to be far fewer vowels than consonants. Likewise, all languages contrast vowels based on height, although which contrast is made is language-dependent (Ladefoged and Maddieson, 1996). Moreover, while over 600 unique vowel phonemes have been attested cross-linguistically (Moran et al., 2014), certain regions of acoustic space are used much more often than others, e.g., the regions conventionally transcribed as [a], [i], and [u]. Human language also seems to prefer inventories where phonologically distinct vowels are spread out in acoustic space (“dispersion”) so that they can be easily distinguished by a listener.

[Figure 1: The transformed vowel space that is constructed within one of our deep generative models (see §7.1). A deep network nonlinearly maps the blue grid (“formant space”) to the red grid (“metric space”), with individual vowels mapped from blue to red position as shown. Vowel pairs such as [@]–[O] that are brought close together are anti-correlated in the point process. Other pairs such as [y]–[1] are driven apart. For purposes of the visualization, we have transformed the red coordinate system to place red vowels near their blue positions—while preserving distances up to a constant factor (a “Procrustes transformation”).]
We depict the acoustic space for English in Figure 2. In this work, we regard the proper goal of linguistic typology as the construction of a universal prior distribution from which linguistic systems are drawn. For vowel system typology, we propose three formal probability models based on stochastic point processes. We estimate the parameters of the model on one set of languages and evaluate performance on a held-out set. We explore three questions: (i) How well do the properties of our proposed probability models line up experimentally with linguistic theory? (ii) How well can our models predict held-out vowel systems? (iii) Do our models benefit from a “deep” transformation from formant space to metric space? 1182 iː ɪ e æ ə ʌ ɑː ɒ ɔː ʊ uː Figure 2: The standard vowel table in IPA for the RP accent of English. The x-axis indicates the front-back spectrum and the y-axis indicates the high-low distinction. 2 Vowel Inventories and their Typology Vowel inventories are a simple entry point into the study of linguistic typology. Every spoken language chooses a discrete set of vowels, and the number of vowel phonemes ranges from 3 to 46, with a mean of 8.7 (Gordon, 2016). Nevertheless, the empirical distribution over vowel inventories is remarkably peaked. The majority of languages have 5–7 vowels, and there are only a handful of distinct 4-vowel systems attested despite many possibilities. Reigning linguistic theory (BeckerKristal, 2010) has proposed that vowel inventories are shaped by the principles discussed below. 2.1 Acoustic Phonetics One way to describe the sound of a vowel is through its acoustic energy at different frequencies. A spectrogram (Figure 3) is a visualization of the energy at various frequencies over time. Consider the “peak” frequencies F0 < F1 < F2 < . . . that have a greater energy than their neighboring frequencies. F0 is called the fundamental frequency or pitch. The other qualities of the vowel are largely determined by F1, F2, . . ., which are known as formants (Ladefoged and Johnson, 2014). In many languages, the first two formants F1 and F2 contain enough information to identify a vowel: Figure 3 shows how these differ across three English vowels. We consider each vowel listed in the International Phonetic Alphabet (IPA) to be cross-linguistically characterized by some (F1, F2) pair. 2.2 Dispersion The dispersion criterion (Liljencrants and Lindblom, 1972; Lindblom, 1986) states that the phonemes of a language must be “spread out” so that they are easily discriminated by a listener. A 0 Hz 1000 Hz 2000 Hz 3000 Hz 4000 Hz 5000 Hz /i/ /u/ /ɑ/ Figure 3: Example spectrogram of the three English vowels: [i], [u] and [A]. The x-axis is time and y-axis is frequency. The first two formants F1 and F2 are marked in with colored arrows for each vowel. We used the Praat toolkit to generate the spectrogram and find the formants (Boersma et al., 2002). language seeks phonemes that are sufficiently “distant” from one another to avoid confusion. Distances between phonemes are defined in some latent “metric space.” We use this term rather than “perceptual space” because the confusability of two vowels may reflect not just their perceptual similarity, but also their common distortions by imprecise articulation or background noise.1 2.3 Focalization The dispersion criterion alone does not seem to capture the whole story. Certain vowels are simply more popular cross-linguistically. A commonly accepted explanation is the quantal theory of speech (Stevens, 1972, 1989). 
The quantal theory states that certain sounds are easier to articulate and to perceive than others. These vowels may be characterized as those where F1 and F2 have frequencies that are close to one another. On the production side, these vowels are easier to pronounce since they allow for greater articulatory imprecision. On the perception side, they are more salient since the two spectral peaks aggregate and act as one, larger peak to a certain degree. In general, languages will prefer these vowels. 2.4 Dispersion-Focalization Theory The dispersion-focalization theory (DFT) combines both of the above notions. A good vowel system now consists of vowels that contrast with each other and are individually desirable (Schwartz et al., 1997). This paper provides the first probabilistic treatment of DFT, and new evaluation metrics for future probabilistic and non-probabilistic treatments of vowel inventory typology. 1We assume in this paper that the metric space is universal—although it would not be unreasonable to suppose that each language’s vowel system has adapted to avoid confusion in the specific communicative environment of its speakers. 1183 3 Point Process Models Given a base set V, a point process is a distribution over its subsets.2 In this paper, we take V to be the set of all IPA symbols corresponding to vowels. Thus a draw from a point process is a vowel inventory V ⊆V, and the point process itself is a distribution over such inventories. We will consider three basic point process models for vowel systems: the Bernoulli Point Process, the Markov Point Process and the Determinantal Point Process. In this section, we review the relevant theory of point processes, highlighting aspects related to §2. 3.1 Bernoulli Point Processes Taking V = {v1, . . . , vN}, a Bernoulli point process (BPP) makes an independent decision about whether to include each vowel in the subset. The probability of a vowel system V ⊆V is thus p(V ) ∝ Y vi∈V φ(vi), (1) where φ is a unary potential function, i.e., φ(vi) ≥ 0. Qualitatively, this means that φ(vi) should be large if the ith vowel is good in the sense of §2.3. Marginal inference in a BPP is computationally trivial. The probability that the inventory V contains vi is φ(vi)/(1 + φ(vi)), independent of the other vowels in V . Since a BPP predicts each vowel independently, it only models focalization. Thus, the model provides an appropriate baseline that will let us measure the importance of the dispersion principle—how far can we get with just focalization? A BPP may still tend to generate well-dispersed sets if it defines φ to be large only on certain vowels in V and these are well-dispersed (e.g., [i], [u], [a]). More precisely, it can define φ so that φ(vi)φ(vj) is small whenever vi, vj are similar.3 But it cannot actively encourage dispersion: 2A point process is a specific kind of stochastic process, which is the technical term for a distribution over functions. Under this view, drawing some subset of V from the point process is regarded as drawing some indicator function on V. 3We point out that such a scheme would break down if we extended our work to cover fine-grained phonetic modeling of the vowel inventory. In that setting, we ask not just whether the inventory includes /i/ but exactly which pronunciation of /i/ it contains. In the limit, φ becomes a function over a continuous vowel space V = R2, turning the BPP into an inhomogeneous spatial Poisson process. 
A continuous φ function implies that the model places similar probability on similar vowels. Then if most vowel inventories contain some version of /i/, then many of them will contain several closely related variants of /i/ (independently chosen). By contrast, the other methods in this paper do extend nicely to fine-grained phonetic modeling. including vi does not lower the probability of also including vj. 3.2 Markov Point Processes A Markov Point Process (MPP) (Van Lieshout, 2000)—also known as a Boltzmann machine (Ackley et al., 1985; Hinton and Sejnowski, 1986)— generalizes the BPP by adding pairwise interactions between vowels. The probability of a vowel system V ⊆V is now p(V ) ∝ Y vi∈V φ(vi) Y vi,vj∈V ψ(vi, vj), (2) where each φ(vi) ≥0 is, again, a unary potential that scores the quality of the ith vowel, and each ψ(vi, vj) ≥0 is a binary potential that scores the combination of the ith and jth vowels. Roughly speaking, the potential ψ(vi, vj) should be large if the ith and jth vowel often co-occur. Recall that under the principle of dispersion, the vowels that often co-occur are easily distinguishable. Thus, confusable vowel pairs should tend to have potential ψ(vi, vj) < 1. Unlike the BPP, the MPP can capture both focalization and dispersion. In this work, we will consider a fully connected MPP, i.e., there is a potential function for each pair of vowels in V. MPPs closely resemble Ising models (Ising, 1925), but with the difference that Ising models are typically lattice-structured, rather than fully connected. Inference in MPPs. Inference in fully connected MPPs, just as in general Markov Random Fields (MRFs), is intractable (Cooper, 1990) and we must rely on approximation. In this work, we estimate any needed properties of the MPP distribution by (approximately) drawing vowel inventories from it via Gibbs sampling (Geman and Geman, 1984; Robert and Casella, 2005). Gibbs sampling simulates a discrete-time Markov chain whose stationary distribution is the desired MPP distribution. At each time step, for some random vi ∈V, it stochastically decides whether to replace the current inventory V with ¯V , where ¯V is a copy of V with vi added (if vi /∈V ) or removed (if vi ∈V ). The probability of replacement is p( ¯V ) p(V )+p( ¯V ). 3.3 Determinantal Point Processes A determinantal point process (DPP) (Macchi, 1975) provides an elegant alternative to an MPP, and one that is directly suited to modeling both focalization and dispersion. Inference requires only 1184 a few matrix computations and runs tractably in O(|V|3) time, even though the model may encode a rich set of multi-way interactions. We focus on the L-ensemble parameterization of the DPP, due to Borodin and Rains (2005).4 This type of DPP defines the probability of an inventory V ⊆V as p(V ) ∝det LV , (3) where L ∈RN×N (for N = |V|) is a symmetric positive semidefinite matrix, and LV refers to the submatrix of L with only those rows and columns corresponding to those elements in the subset V . Although MAP inference remains NP-hard in DPPs (just as in MPPs), marginal inference becomes tractable. We may compute the normalizing constant in closed form as follows: X V ∈2V det LV = det (L + I) . (4) How does a DPP ensure focalization and dispersion? L is positive semidefinite iff it can be written as E⊤E for some matrix E ∈RN×N. It is possible to express p(V ) in terms of the column vectors of E, which we call e1, . . . 
, eN: • For inventories of size 2, p({vi, vj}) ∝ (φ(vi)φ(vj) sin θ)2, where φ(vi), φ(vj) represent the quality of vowels vi, vj (as in the BPP) while sin θ ∈[0, 1] represents their dissimilarity. More precisely, φ(vi), φ(vj) are the lengths of vectors ei, ej while θ is the angle between them. Thus, we should choose the columns of E so that focal vowels get long vectors and similar vowels get vectors of similar direction. • Generalizing beyond inventories of size 2, p(V ) is proportional to the square of the volume of the parallelepiped whose sides are given by {ei : vi ∈V }. This volume can be regarded as Q vi∈V φ(vi) times a term that ranges from 1 for an orthogonal set of vowels to 0 for a linearly dependent set of vowels. • The events vi ∈V and vj ∈V are anticorrelated (when not independent). That is, while both vowels may individually have high probabilities (focalization), having either one in the inventory lowers the probability of the other (dispersion). 4Most DPPs are L-ensembles (Kulesza and Taskar, 2012). 4 Dataset At this point it is helpful to introduce the empirical dataset we will model. For each of 223 languages,5 Becker-Kristal (2010) provides the vowel inventory as a set of IPA symbols, listing the first 5 formants for each vowel (or fewer when not available in the original source). Some corpus statistics are shown in Figs. 4 and 5.6 For the present paper, we take V to be the set of all 53 IPA symbols that appear in the corpus. We treat these IPA labels as meaningful, in that we consider two vowels in different languages to be the same vowel in V if (for example) they are both annotated as [O]. We characterize that vowel by its average formant vector across all languages in the corpus that contain the vowel: e.g., (F1, F2, . . .) = (500, 700, . . .) for [O]. In future work, we plan to relax this idealization (see footnote 3), allowing us to investigate natural questions such as whether [u] is pronounced higher (smaller F1) in languages that also contain [o] (to achieve better dispersion). 5 Model Parameterization The BPP, MPP, and DPP models (§3) require us to specify parameters for each vowel in V. In §5.1, we will accomplish this by deriving the parameters for each vowel vi from a possibly high-dimensional embedding of that vowel, e(vi) ∈Rr. In §5.2, e(vi) ∈Rr will in turn be defined as some learned function of f(vi) ∈Rk, where f : V 7→Rk is the function that maps a vowel to a k-vector of its measurable acoustic properties. This approach allows us to determine reasonable parameters even for rare vowels, based on their measurable properties. It will even enable us in 5Becker-Kristal lists some languages multiple times with different measurements. When a language had multiple listings, we selected one randomly for our experiments. 6Caveat: The corpus is a curation of information from various phonetics papers into a common electronic format. No standard procedure was followed across all languages: it was up to individual phoneticists to determine the size of each vowel inventory, the choice of IPA symbols to describe it, and the procedure for measuring the formants. Moreover, it is an idealization to provide a single vector of formants for each vowel type in the language. In real speech, different tokens of the same vowel are pronounced differently, because of coarticulation with the vowel context, allophony, interspeaker variation, and stochastic intraspeaker variation. Even within a token, the formants change during the duration of the vowel. 
Thus, one might do better to represent a vowel’s pronunciation not by a formant vector, but by a conditional probability distribution over its formant trajectories given its context, or by a parameter vector that characterizes such a conditional distribution. This setting would require richer data than we present here. 1185 future to generalize to vowels that were unseen in the training set, letting us scale to very large or infinite V (footnote 3). 5.1 Deep Point Processes We consider deep versions of all three processes. Deep Bernoulli Point Process. We define φ(vi) = ||e(vi)|| ≥0 (5) Deep Markov Point Process. The MPP employs the same unary potential as the BPP, as well as the binary potential ψ(vi, vj) = exp − 1 T · ||e(vi)−e(vj)||2 < 1 (6) where the learned temperature T > 0 controls the relative strength of the unary and binary potentials. This formula is inspired by Coulomb’s law for describing the repulsion of static electrically charged particles. Just as the repulsive force between two particles approaches ∞as they approach each other, the probability of finding two vowels in the same inventory approaches exp −∞= 0 as they approach each other. The formula is also reminiscent of Shepard (1987)’s “universal law of generalization,” which says here that the probability of responding to vi as if it were vj should fall off exponentially with their distance in some “psychological space” (here, embedding space). Deep Determinantal Point Process. For the DPP, we simply define the vector ei to be e(vi), and proceed as before. Summary. In the deep BPP, the probability of a set of vowels is proportional to the product of the lengths of their embedding vectors. The deep MPP modifies this by multiplying in pairwise repulsion terms in (0, 1) that increase as the vectors’ endpoints move apart in Euclidean space (or as T →∞). The deep DPP instead modifies it by multiplying in a single setwise repulsion term in (0, 1) that increases as the embedding vectors become more mutually orthogonal. In the limit, then, the MPP and DPP both approach the BPP. 5.2 Embeddings Throughout this work, we simply have f extract the first k = 2 formants, since our dataset does not provide higher formants for all languages.7 For 7In lieu of higher formants, we could have extended the vector f(vi) to encode the binary distinctive features of the IPA vowel vi: round, tense, long, nasal, creaky, etc. example, we have f([O]) = (500, 700). We now describe three possible methods for mapping f(vi) to an embedding e(vi). Each of these maps has learnable parameters. Neural Embedding. We first consider directly embedding each vowel vi into a vector space Rr. We achieve this through a feed-forward neural net e(vi) = W1 tanh (W0f(vi) + b0) + b1, (7) Equation (7) gives an architecture with 1 layer of nonlinearity; in general we consider stacking d ≥0 layers. Here W0 ∈Rr×k, W1 ∈Rr×r, . . . Wd ∈ Rr×r are weight matrices, b0, . . . bd ∈Rr are bias vectors, and tanh could be replaced by any pointwise nonlinearity. We treat both the depth d and the embedding size r as hyperparameters, and select the optimal values on a development set. Interpretable Neural Embedding. We are interested in the special case of neural embeddings when r = k since then (for any d) the mapping f(vi) 7→e(vi) is a diffeomorphism:8 a smooth invertible function of Rk. An example of such a diffeomorphism is shown in Figure 1. There is a long history in cognitive psychology of mapping stimuli into some psychological space. 
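Before turning to the interpretable and prototype-based variants, here is a small numpy sketch of the deep potentials just defined: the feed-forward embedding of Eq. 7, the unary potential of Eq. 5, and the Coulomb-style pairwise potential of Eq. 6. The random weights stand in for learned parameters, and the example formant vectors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
k, r, depth = 2, 10, 1          # formant dimension, embedding size, layers of nonlinearity

# random stand-ins for the learned weights W_0, ..., W_d and biases b_0, ..., b_d
W = [rng.normal(size=(r, k))] + [rng.normal(size=(r, r)) for _ in range(depth)]
b = [rng.normal(size=r) for _ in range(depth + 1)]

def embed(f):
    """Neural embedding of Eq. 7, e.g. e(v) = W_1 tanh(W_0 f(v) + b_0) + b_1 for depth 1."""
    h = f
    for W_i, b_i in zip(W[:-1], b[:-1]):
        h = np.tanh(W_i @ h + b_i)
    return W[-1] @ h + b[-1]

def phi(f):
    """Unary potential of Eq. 5: the length of the embedding vector."""
    return np.linalg.norm(embed(f))

def psi(f1, f2, T=1.0):
    """Pairwise potential of Eq. 6: repulsion that approaches 0 as two vowels coincide."""
    return np.exp(-np.sum((embed(f1) - embed(f2)) ** 2) / T)

# centered and rescaled (F1, F2) vectors, roughly in the spirit of the paper's preprocessing
f_i = np.array([-0.3, 1.4])     # a high front vowel
f_u = np.array([-0.3, -0.9])    # a high back vowel
print(phi(f_i), phi(f_u), psi(f_i, f_u))
```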
The distances in this psychological space may be predictive of generalization (Shepard, 1987) or of perception. Due to the anatomy of the ear, the mapping of vowels from acoustic space to perceptual space is often presumed to be nonlinear (Rosner and Pickering, 1994; Nearey and Kiefte, 2003), and there are many perceptually-oriented phonetic scales, e.g., Bark and Mel, that carry out such nonlinear transformations while preserving the dimensionality k, as we do here. As discussed in §2.2, vowel system typology is similarly believed to be influenced by distances between the vowels in a latent metric space. We are interested in whether a constrained k-dimensional model of these distances can do well in our experiments. Prototype-Based Embedding. Unfortunately, our interpretable neural embedding is unfortunately incompatible with the DPP. The DPP assigns probability 0 to any vowel inventory V whose e vectors are linearly dependent. If the vectors are in Rk, then this means that p(V ) = 0 whenever |V | > k. In our setting, this would limit vowel inventories to size 2. 8Provided that our nonlinearity in (7) is a differentiable invertible function like tanh rather than relu. 1186 Our solution to this problem is to still construct our interpretable metric space Rk, but then map that nonlinearly to Rr for some large r. This latter map is constrained. Specifically, we choose “prototype” points µ1, . . . , µr ∈Rk. These prototype points are parameters of the model: their coordinates are learned and do not necessarily correspond to any actual vowel. We then construct e(vi) ∈Rr as a “response vector” of similarities of our vowel vi to these prototypes. Crucially, the responses depend on distances measured in the interpretable metric space Rk. We use a Gaussian-density response function, where x(vi) denotes the representation of our vowel vi in the interpretable space: e(vi)ℓ= wℓp(x(vi); µℓ, σ2I) (8) = wℓ(2πσ2)−( k 2) exp −||x −µℓ||2 2σ2  . for ℓ= 1, 2, . . . , r. We additionally impose the constraints that each wℓ≥0 and Pr ℓ=1 wℓ= 1. Notice that the sum Pr ℓ=1 e(vi) may be viewed as the density at x(vi) under a Gaussian mixture model. We use this fact to construct a prototypebased MPP as well: we redefine φ(vi) to equal this positive density, while still defining ψ via equation (6). The idea is that dispersion is measured in the interpretable space Rk, and focalization is defined by certain “good” regions in that space that are centered at the r prototypes. 6 Evaluation Metrics Fundamentally, we are interested in whether our model has abstracted the core principles of what makes a good vowel system. Our choice of a probabilistic model provides a natural test: how surprised is our model by held-out languages? In other words, how likely does our model think unobserved, but attested vowel systems are? While this is a natural evaluation paradigm in NLP, it has not—to the best of our knowledge—been applied to a quantitative investigation of linguistic typology. As a second evaluation, we introduce a vowel system cloze task that could also be used to evaluate non-probabilistic models. This task is defined by analogy to the traditional semantic cloze task (Taylor, 1953), where the reader is asked to fill in a missing word in the sentence from the context. In our vowel system cloze task, we present a learner with a subset of the vowels in a held-out vowel system and ask them to predict the remaining vowels. 
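To make the cloze evaluation concrete before the worked example below, here is a sketch of cloze-1 prediction under the simplest of the models, a BPP with given unary potentials: since exactly one vowel is missing, we add the candidate that maximizes the score of the completed inventory. The potential values and the toy base set are illustrative, not estimated from the corpus.

```python
import numpy as np

def bpp_log_score(inventory, phi):
    """Unnormalized log-probability of an inventory under a BPP (the product of unary
    potentials); the normalizer is constant across candidates, so it can be ignored
    when ranking cloze completions."""
    return sum(np.log(phi[v]) for v in inventory)

def cloze1(observed, base_set, log_score):
    """Cloze-1 task: exactly one vowel was deleted, so return the candidate vowel
    whose addition yields the highest-scoring completed inventory."""
    candidates = [v for v in base_set if v not in observed]
    return max(candidates, key=lambda v: log_score(observed | {v}))

# illustrative potentials over a toy base set
phi = {"i": 3.0, "u": 2.6, "a": 3.4, "e": 1.2, "o": 1.2, "y": 0.1, "@": 0.8}
observed = {"i", "a", "e", "o"}            # a five-vowel system with one vowel removed
print(cloze1(observed, phi.keys(), lambda V: bpp_log_score(V, phi)))   # -> 'u'
```

Under a BPP the argmax reduces to the missing vowel with the largest φ; for the MPP or DPP, log_score would instead evaluate the corresponding joint probability of the completed inventory, so dispersion also influences the prediction.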
Consider, as a concrete example, the general American English vowel system (excluding long vowels) {[i], [I], [u], [U], [E], [æ], [O], [A], [@]}. One potential cloze task would be to predict {[i], [u]} given {[I], [U], [E], [æ], [O], [A], [@]} and the fact that two vowels are missing from the inventory. Within the cloze task, we report accuracy, i.e., did we guess the missing vowel right? We consider three versions of the cloze tasks. First, we predict one missing vowel in a setting where exactly one vowel was deleted. Second, we predict up to one missing vowel where a vowel may have been deleted. Third, we predict up to two missing vowels, where one or two vowels may be deleted. 7 Experiments We evaluate our models using 10-fold crossvalidation over the 223 languages. We report the mean performance over the 10 folds. The performance on each fold (“test”) was obtained by training many models on 8 of the other 9 folds (“train”), selecting the model that obtained the best task-specific performance on the remaining fold (“development”), and assessing it on the test fold. Minimization of the parameters is performed with the L-BFGS algorithm (Liu and Nocedal, 1989). As a preprocessing step, the first two formants values F1 and F2 are centered around zero and scaled down by a factor of 1000 since the formant values themselves may be quite large. Specifically, we use the development fold to select among the following combinations of hyperparameters. For neural embeddings, we tried r ∈{2, 10, 50, 100, 150, 200}. For prototype embeddings, we took the number of components r ∈{20, 30, 40, 50}. We tried network depths d ∈{0, 1, 2, 3}. We sweep the coefficient for an L2 regularizer on the neural network parameters. 7.1 Results and Discussion Figure 1 visualizes the diffeomorphism from formant space to metric space for one of our DPP models (depth d = 3 with r = 20 prototypes). Similar figures can be generated for all of the interpretable models. We report results for cross-entropy and the cloze evaluation in Table 1.9 Under both metrics, we see that the DPP is slightly better than the MPP; both are better than the BPP. This ranking holds for 9Computing cross-entropy exactly is intractable with the MPP, so we resort to an unbiased importance sampling scheme where we draw samples from the BPP and reweight according to the MPP (Liu et al., 2015). 1187 BPP uBPP uMPP uDPP iBPP iMPP iDPP pBPP pMPP pDPP x-ent 8.24 8.28 8.08 8.00 13.01 11.50  12.83 10.95 10.29 cloze-1 69.55% 69.55% 72.05% 73.18% 64.13% 67.02%  65.13% 68.18% 68.18% cloze-01 60.00% 60.00% 61.01% 62.27% 61.78% 61.04%  61.02% 63.04% 63.63% cloze-012 53.18% 53.18% 57.92% 58.18% 39.04% 43.02%  40.56% 45.01% 45.46% Table 1: Cross-entropy in nats (lower is better) and cloze prediction accuracy (higher is better). “BPP” is a simple BPP with one parameter for each of the 53 vowels in V. This model does artificially well by modeling an “accidental” feature of our data: it is able to learn not only which vowels are popular among languages, but also which IPA symbols are popular or conventional among the descriptive phoneticists who created our dataset (see footnote 6), something that would become irrelevant if we upgraded our task to predict actual formant vectors rather than IPA symbols (see footnote 3). Our point processes, by contrast, are appropriately allowed to consider a vowel only through its formant vector. 
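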
The “u-” versions of the models use the uninterpretable neural embedding of the formant vector into Rr: by taking r to be large, they are still able to learn special treatment for each vowel in V (which is why uBPP performs identically to BPP, before being beaten by uMPP and uDPP). The “i-” versions limit themselves to an interpretable neural embedding into Rk, giving a more realistic description that does not perform as well. The “p-”versions lift that Rk embedding into Rr by measuring similarities to r prototypes; they thereby improve on the corresponding i- versions. For each result shown, the depth d of our neural network was tuned on a development set (typically d = 2). r was also tuned when applicable (typically r > 100 dimensions for the u- models and r ≈30 prototypes for the p- models). each of the 3 embedding schemes. The embedding schemes themselves are compared in the caption. Within each embedding scheme, the BPP performs several points worse on the cloze tasks, confirming that dispersion is needed to model vowel inventories well. Still, the BPP’s respectable performance shows that much of the structure can be capture by focalization. As §3 noted, the BPP may generate well-dispersed sets, as the common vowels tend to be dispersed already (see Figure 4). In this capacity, however, the BPP is not explanatory as it cannot actually tell us why these vowels should be frequent. We mention that depth in the neural network is helpful, with deeper embedding networks performing slightly better than depth d = 0. Finally, we identified each model’s favorite complete vowel system of size n (Table 2). For the BPP, this is simply the n most probable vowels. Decoding the DPP and MPP is NP-hard, but we found the best system by brute force (for small n). The dispersion in these models predicts different systems than the BPP. 8 Discussion: Probabilistic Typology Typology as Density Estimation? Our goal is to define a universal distribution over all possible vowel inventories. Is this appropriate? We regard this as a natural approach to typology, because it directly describes which kinds of linguistic systems are more or less common. Traditional implicational universals (“all languages with vi have vj”) are softened, in our approach, into conditional probabilities such as “p(vj ∈V | vi ∈V ) ≈0.9.” Here the 0.9 is not merely an empirical ratio, but a smoothed probability derived from the complete estimated distribution. It is meant to make predictions about unseen languages. Whether human language learners exploit any properties of this distribution10 is a separate question that goes beyond typology. Jakobson (1941) did find that children acquired phoneme inventories in an order that reflected principles similar to dispersion (“maximum contrast”) and focalization. At any rate, we estimate the distribution given some set of attested systems that are assumed to have been drawn IID from it. One might object that this IID assumption ignores evolutionary relationships among the attested systems, causing our estimated distribution to favor systems that are coincidentally frequent among current human languages, rather than being natural in some timeless sense. We reply that our approach is then appropriate when the goal of typology is to estimate the distribution of actual human languages—a distribution that can be utilized in principle (and also in practice, as we show) to predict properties of actual languages from outside the training set. 
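As a rough illustration of how an implicational universal becomes a conditional probability under this view, the sketch below computes the raw empirical ratio from a set of attested inventories; the model's smoothed estimate would instead be taken under the fitted point process (for example, by averaging over sampled inventories), and the inventories in the example are toy data, not the Becker-Kristal corpus.

```python
def implicational_tendency(inventories, vi, vj):
    """Empirical p(vj in V | vi in V): among inventories containing vi,
    the fraction that also contain vj. A fitted model would smooth this."""
    with_vi = [inv for inv in inventories if vi in inv]
    if not with_vi:
        return float("nan")
    return sum(vj in inv for inv in with_vi) / len(with_vi)

# Toy attested inventories represented as sets of IPA symbols:
attested = [{"i", "u", "a"}, {"i", "u", "a", "e", "o"}, {"i", "a", "o"}]
print(implicational_tendency(attested, "u", "o"))  # 0.5 on this toy data
```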
A different possible goal of typology is a theory of natural human languages. This goal would require a more complex approach. One should not imagine that natural languages are drawn in a vacuum from some single, stationary distribution. Rather, each language is drawn conditionally on its parent language. Thus, one should estimate a stochastic model of the evolution of linguistic systems through time, and identify “naturalness” with 10This could happen because learners have evolved to expect the languages (the Baldwin effect), or because the languages have evolved to be easily learned (universal grammar). 1188 BPP MPP DPP changes from n −1 changes from n −1 changes from n −1 n MAP inventory additions deletions MAP inventory additions deletions MAP inventory additions deletions 1 i i @ @ @ @ 2 i, u u i, u i, u @ i, u i, u @ 3 i, u, a a i, u, a a i, u, a a 4 i, u, a, o o i, u, a, e e i, u, a, o o 5 i, u, a, o, e e i, u, a, e, @ @ i, u, a, o, @ o Table 2: Highest-probability inventory of each size according to our three models (prototype-based embeddings and d = 3). The MAP configuration is computed by brute-force enumeration for small n. the directions in which this system tends to evolve. Energy Minimization Approaches. The traditional energy-based approach (Liljencrants and Lindblom, 1972) to vowel simulation minimizes the following objective (written in our notation): E(m) = X 1≤i<j≤m 1 ||e(vi) −e(vj)||2 , (9) where the vectors e(vi) ∈Rr are not spit out of a deep network, as in our case, but rather directly optimized. Liljencrants and Lindblom (1972) propose a coordinate descent algorithm to optimize E(m). While this is not in itself a probabilistic model, they generate diverse vowel systems through random restarts that find different local optima (a kind of deterministic evolutionary mechanism). We note that equation (9) assumes that the number of vowels m is given, and only encodes a notion of dispersion. Roark (2001) subsequently extended equation (9) to include the notion of focalization. Vowel Inventory Size. A fatal flaw of the traditional energy minimization paradigm is that it has no clear way to compare vowel inventories of different sizes. The problem is quite crippling since, in general, inventories with fewer vowels will have lower energy. This does not match reality—the empirical distribution over inventory sizes (shown in Figure 5) shows that the mode is actually 5 and small inventories are uncommon: no 1-vowel inventory is attested and only one 2-vowel inventory is known. A probabilistic model over all vowel systems must implicitly model the size of the system. Indeed, our models pit all potential inventories against each other, bestowing the extra burden to match the empirical distribution over size. Frequency of Inventories. Another problem is the inability to model frequency. While for inventories of a modest size (3-5 vowels) there are very few unique attested systems, there is a plethora of attested larger vowel systems. The energy minimization paradigm has no principled manner to tell the scientist how likely a novel system may be. Appealing again to the empirical distribution over attested vowel systems, we consider the relative diversity of systems of each size. We graph this in Figure 5. Consider all vowel systems of size 7. There are |V| 7  potential inventories, yet the empirical distribution is remarkably peaked. Our probabilistic models have the advantage in this context as well, as they naturally quantify the likelihood of an individual inventory. 
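For reference, a minimal sketch of the dispersion objective in equation (9); the vowel vectors below are placeholder coordinates, and a full simulation in the Liljencrants and Lindblom style would additionally need a minimization routine such as coordinate descent with random restarts.

```python
import numpy as np

def dispersion_energy(E):
    """Equation (9): sum over pairs i < j of 1 / ||e(v_i) - e(v_j)||^2.
    Lower energy corresponds to a more dispersed inventory of fixed size m."""
    m = len(E)
    return sum(1.0 / np.sum((E[i] - E[j]) ** 2)
               for i in range(m) for j in range(i + 1, m))

# Placeholder 2-D coordinates for a 3-vowel system:
print(dispersion_energy(np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])))
```

Note that a smaller inventory contributes fewer pairwise terms and so trivially attains lower energy, which is exactly the size-comparison problem discussed above.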
Typology is a Small-Data Problem. In contrast to many common problems in applied NLP, e.g., part-of-speech tagging, parsing and machine translation, the modeling of linguistic typology is fundamentally a “small-data” problem. Out of the 7105 languages on earth, we only have linguistic annotation for 2600 of them (Comrie et al., 2013). Moreover, we only have phonetic and phonological annotation for a much smaller set of languages— between 300-500 (Maddieson, 2013). Given the paucity of data, overfitting on only those attested languages is a dangerous possibility—just because a certain inventory has never been attested, it is probably wrong to conclude that it is impossible— or even improbable—on that basis alone. By analogy to language modeling, almost all sentences observed in practice are novel with respect to the training data, but we still must employ a principled manner to discriminate high-probability sentences (which are syntactically and semantically coherent) from low-probability ones. Probabilistic modeling provides a natural paradigm for this sort of investigation—machine learning has developed well-understood smoothing techniques, e.g., regularization with tuning on a held-out dev set, to avoid overfitting in a small-data scenario. Related Work in NLP. Various point processes have been previously applied to potpourri of tasks 1189 i u a o e ɔ ɛ ɪ y ʊ ɑ ø æ ə ɨ œ ʏɯʌ ɤ ɒ ɵ ʉ ɜ ɐ e̞ ö 0 10 20 30 40 50 60 70 80 90 Figure 4: Percentage of the vowel inventories (y-axis) in the Becker-Kristal corpus (Becker-Kristal, 2010) that have a given vowel (shown in IPA along the x-axis). in NLP. Determinantal point processes have found a home in the literature in tasks that require diversity. E.g., DPPs have achieved state-of-the-art results on multi-document document summarization (Kulesza and Taskar, 2011), news article selection (Affandi et al., 2012) recommender systems (Gartrell et al., 2017), joint clustering of verbal lexical semantic properties (Reichart and Korhonen, 2013), inter alia. Poisson point processes have also been applied to NLP problems: Yee et al. (2015) model the emerging topic on social media using a homogeneous point process and Lukasik et al. (2015) apply a log-Gaussian point process, a variant of the Poisson point process, to rumor detection in Twitter. We are unaware of previous attempts to probabilistically model vowel inventory typology. Future Work. This work lends itself to several technical extensions. One could expand the function f to more completely characterize each vowel’s acoustic properties, perceptual properties, or distinctive features (footnote 7). One could generalize our point process models to sample finite subsets from the continuous space of vowels (footnote 3). One could consider augmenting the MPP with a new factor that explicitly controls the size of the vowel inventory. Richer families of point processes might also be worth exploring. For example, perhaps the vowel inventory is generated by some temporal mechanism with latent intermediate steps, such as sequential selection of the vowels or evolutionary drift of the inventory. Another possibility is that vowel systems tend to reuse distinctive features or even follow factorial designs, so that an inventory with creaky front vowels also tends to have creaky back vowels. 3 4 5 6 7 8 9 10 11 12 13 14 0 20 40 60 80 100 120 140 Figure 5: Histogram of the sizes of different vowel inventories in the corpus. 
The x-axis is the size of the vowel inventory and the y-axis is the number of inventories with that size. 9 Conclusions We have presented a series of point process models for the modeling of vowel system inventory typology with the goal of a mathematical grounding for research in phonological typology. All models were additionally given a deep parameterization to learn representations similar to perceptual space in cognitive science. Also, we motivated our preference for probabilistic modeling in linguistic typology over previously proposed computational approaches and argued it is a more natural research paradigm. Additionally, we have introduced several novel evaluation metrics for research in vowelsystem typology, which we hope will spark further interest in the area. Their performance was empirically validated on the Becker-Kristal corpus, which includes data from over 200 languages. Acknowledgments The first author was funded by an NDSEG graduate fellowship, and the second author by NSF grant IIS1423276. We would like to thank Tim Vieira and Huda Khayrallah for helpful initial feedback. References David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski. 1985. A learning algorithm for Boltzmann machines. Cognitive Science 9(1):147–169. Raja Hafiz Affandi, Alex Kulesza, and Emily B. Fox. 2012. Markov determinantal point processes. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence. pages 26–35. Roy Becker-Kristal. 2010. Acoustic Typology of Vowel Inventories and Dispersion Theory: Insights from a Large Cross-Linguistic Corpus. Ph.D. thesis, UCLA. 1190 Paulus Petrus Gerardus Boersma et al. 2002. Praat, a system for doing phonetics by computer. Glot International 5. Alexei Borodin and Eric M. Rains. 2005. EynardMehta theorem, Schur process, and their Pfaffian analogs. Journal of Statistical Physics 121(34):291–317. Bernard Comrie, Matthew S. Dryer, David Gil, and Martin Haspelmath. 2013. Introduction. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online, Max Planck Institute for Evolutionary Anthropology, Leipzig. http://wals.info/chapter/s1. Gregory F. Cooper. 1990. The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence 42(2-3):393–405. Mike Gartrell, Ulrich Paquet, and Noam Koenigstein. 2017. Low-rank factorization of determinantal point processes pages 1912–1918. Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence (6):721–741. Matthew K. Gordon. 2016. Phonological Typology. Oxford. Geoffrey E. Hinton and Terry J. Sejnowski. 1986. Learning and relearning in Boltzmann machines. In David E. Rumelhart and James L. McClelland, editors, Parallel Distributed Processing, MIT Press, volume 2, chapter 7, pages 282–317. Ernst Ising. 1925. Beitrag zur theorie des ferromagnetismus. Zeitschrift f¨ur Physik A Hadrons and Nuclei 31(1):253–258. Roman Jakobson. 1941. Kindersprache, Aphasie und allgemeine Lautgesetze. Suhrkamp Frankfurt aM. Alex Kulesza and Ben Taskar. 2011. Learning determinantal point processes. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence. pages 419–427. Alex Kulesza and Ben Taskar. 2012. Determinantal point processes for machine learning. Foundations and Trends R⃝in Machine Learning 5(2–3):123–286. Peter Ladefoged and Keith Johnson. 2014. A Course in Phonetics. 
Centage. Peter Ladefoged and Ian Maddieson. 1996. The Sounds of the World’s Languages. Oxford. Johan Liljencrants and Bj¨orn Lindblom. 1972. Numerical simulation of vowel quality systems: The role of perceptual contrast. Language pages 839–862. Bj¨orn Lindblom. 1986. Phonetic universals in vowel systems. Experimental Phonology pages 13–44. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming 45(1-3):503–528. Qiang Liu, Jian Peng, Alexander T. Ihler, and John W. Fisher III. 2015. Estimating the partition function by discriminance sampling. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence. pages 514–522. Michal Lukasik, Trevor Cohn, and Kalina Bontcheva. 2015. Point process modelling of rumour dynamics in social media. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 518–523. Odile Macchi. 1975. The coincidence approach to stochastic point processes. Advances in Applied Probability pages 83–122. Ian Maddieson. 2013. Vowel quality inventories. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online, Max Planck Institute for Evolutionary Anthropology, Leipzig. http://wals.info/chapter/2. Steven Moran, Daniel McCloy, and Richard Wright. 2014. PHOIBLE online. Leipzig: Max Planck Institute for Evolutionary Anthropology . Terrance M. Nearey and Michael Kiefte. 2003. Comparison of several proposed perceptual representations of vowel spectra. Proceedings of the XVth International Congress of Phonetic Sciences 1:1005– 1008. Roi Reichart and Anna Korhonen. 2013. Improved lexical acquisition through DPP-based verb clustering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 862–872. Brian Roark. 2001. Explaining vowel inventory tendencies via simulation: Finding a role for quantal locations and formant normalization. In North East Linguistic Society. volume 31, pages 419–434. Christian P. Robert and George Casella. 2005. Monte Carlo Statistical Methods. Springer-Verlag New York, Inc., Secaucus, NJ, USA. Burton S. Rosner and John B. Pickering. 1994. Vowel Perception and Production. Oxford University Press. Jean-Luc Schwartz, Louis-Jean Bo¨e, Nathalie Vall´ee, and Christian Abry. 1997. The dispersionfocalization theory of vowel systems. Journal of Phonetics 25(3):255–286. Roger N. Shepard. 1987. Toward a universal law of generalization for psychological science. Science 237(4820):1317–1323. 1191 Kenneth N. Stevens. 1972. The quantal nature of speech: Evidence from articulatory-acoustic data. In E. E. David and P. B. Denes, editors, Human Communication: A Unified View, McGraw-Hill, pages 51–56. Kenneth N Stevens. 1989. On the quantal nature of speech. Journal of Phonetics 17:3–45. Wilson L. Taylor. 1953. Cloze procedure: a new tool for measuring readability. Journalism and Mass Communication Quarterly 30(4):415. M. N. M. Van Lieshout. 2000. Markov Point Processes and Their Applications. Imperial College Press, London. Viveka Velupillai. 2012. An Introduction to Linguistic Typology. John Benjamins Publishing Company. Connie Yee, Nathan Keane, and Liang Zhou. 2015. 
Modeling and characterizing social media topics using the gamma distribution. In EVENTS. pages 117–122.
2017
109
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 112–122 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1011 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 112–122 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1011 Discourse Mode Identification in Essays Wei Song†, Dong Wang‡, Ruiji Fu‡, Lizhen Liu†, Ting Liu§, Guoping Hu‡ †Information Engineering, Capital Normal University, Beijing, China ‡iFLYTEK Research, Beijing, China §Harbin Institute of Technology, Harbin, China {wsong, lzliu}@cnu.edu.cn, {dongwang4,rjfu, gphu}@iflytek.com, [email protected] Abstract Discourse modes play an important role in writing composition and evaluation. This paper presents a study on the manual and automatic identification of narration, exposition, description, argument and emotion expressing sentences in narrative essays. We annotate a corpus to study the characteristics of discourse modes and describe a neural sequence labeling model for identification. Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7. We further demonstrate that discourse modes can be used as features that improve automatic essay scoring (AES). The impacts of discourse modes for AES are also discussed. 1 Introduction Discourse modes, also known as rhetorical modes, describe the purpose and conventions of the main kinds of language based communication.Most common discourse modes include narration, description, exposition and argument. A typical text would make use of all the modes, although in a given one there will often be a main mode. Despite their importance in writing composition and assessment (Braddock et al., 1963), there is relatively little work on analyzing discourse modes based on computational models. We aim to contribute for automatic discourse mode identification and its application on writing assessment. The use of discourse modes is important in writing composition, because they relate to several aspects that would influence the quality of a text. First, discourse modes reflect the organization of a text. Natural language texts consist of sentences which form a unified whole and make up the discourse (Clark et al., 2013). Recognizing the structure of text organization is a key part for discourse analysis. Meurer (2002) points that discourse modes stand for unity as they constitute general patterns of language organization strategically used by the writer. Smith (2003) also proposes to study discourse passages from a linguistic view of point through discourse modes. The organization of a text can be realized by segmenting text into passages according to the set of discourse modes that are used to indicate the functional relationship between the several parts of the text. For example, the writer can present major events through narration, provide details with description and establish ideas with argument. The combination and interaction of various discourse modes make an organized unified text. Second, discourse modes have rhetorical significance. Discourse modes are closely related to rhetoric (Connors, 1981; Brooks and Warren, 1958), which offers a principle for learning how to express material in the best way. Discourse modes have different preferences on expressive styles. 
Narration mainly controls story progression by introducing and connecting events; exposition is to instruct or explain so that the language should be precise and informative; argument is used to convince or persuade through logical and inspiring statements; description attempts to bring detailed observations of people and scenery, which is related to the writing of figurative language; the way to express emotions may relate to the use of rhetorical devices and poetic language. Discourse modes reflect the variety of expressive styles. The flexible use of various discourse modes should be important evidence of language proficiency. According to the above thought, we propose the discourse mode identification task. In particular, we make the following contributions: 112 • We build a corpus of narrative essays written by Chinese students in native language. Sentence level discourse modes are annotated with acceptable inter-annotator agreement. Corpus analysis reveals the characteristics of discourse modes in several aspects, including discourse mode distribution, co-occurrence and transition patterns. • We describe a multi-label neural sequence labeling approach for discourse mode identification so that the co-occurrence and transition preferences can be captured. Experimental results show that discourse modes can be identified with an average F1-score of 0.7, indicating that automatic discourse mode identification is feasible. • We demonstrate the effectiveness of taking discourse modes into account for automatic essay scoring. A higher ratio of description and emotion expressing can indicate essay quality to a certain extent. Discourse modes can be potentially used as features for other NLP applications. 2 Related Work 2.1 Discourse Analysis Discourse analysis is an important subfield of natural language processing (Webber et al., 2011). Discourse is expected to be both cohesive and coherent. Many principles are proposed for discourse analysis, such as coherence relations (Hobbs, 1979; Mann and Thompson, 1988), the centering theory for local coherence (Grosz et al., 1995) and topic-based text segmentation (Hearst, 1997). In some domains, discourse can be segmented according to specific discourse elements (Hutchins, 1977; Teufel and Moens, 2002; Burstein et al., 2003; Clerehan and Buchbinder, 2006; Song et al., 2015). This paper focuses on discourse modes influenced by Smith (2003). From the linguistic view of point, discourse modes are supposed to have different distributions of situation entity types such as event, state and generic (Smith, 2003; Mavridou et al., 2015). Therefore, there is work on automatically labeling clause level situation entity types (Palmer et al., 2007; Friedrich et al., 2016). Actually, situation entity type identification is also a challenging problem. It is even harder for processing Chinese language, since Chinese doesn’t have grammatical tense (Xue and Zhang, 2014) and sentence components are often omitted. This increases the difficulties for situation entity type based discourse mode identification. In this paper, we investigate an end-to-end approach to directly model discourse modes without the necessity of identifying situation entity types first. 2.2 Automatic Writing Assessment Automatic writing assessment is an important application of natural language processing. The task aims to let computers have the ability to appreciate and criticize writing. It would be hugely beneficial for applications like automatic essay scoring (AES) and content recommendation. 
AES is the task of building a computer-aided scoring system, in order to reduce the involvement of human raters. Traditional approaches are based on supervised learning with designed feature templates (Larkey, 1998; Burstein, 2003; Attali and Burstein, 2006; Chen and He, 2013; Phandi et al., 2015; Cummins et al., 2016). Recently, automatic feature learning based on neural networks starts to draw attentions (Alikaniotis et al., 2016; Dong and Zhang, 2016; Taghipour and Ng, 2016). Writing assessment involves highly technical aspects of language and discourse. In addition to give a score, it would be better to provide explainable feedbacks to learners at the same time. Some work has studied several aspects such as spelling errors (Brill and Moore, 2000), grammar errors (Rozovskaya and Roth, 2010), coherence (Barzilay and Lapata, 2008), organization of argumentative essays (Persing et al., 2010) and the use of figurative language (Louis and Nenkova, 2013). This paper extends this line of work by taking discourse modes into account. 2.3 Neural Sequence Modeling A main challenge of discourse analysis is hard to collect large scale data due to its complexity, which may lead to data sparseness problem. Recently, neural networks become popular for natural language processing (Bengio et al., 2003; Collobert et al., 2011). One of the advantages is the ability of automatic representation learning. Representing words or relations with continuous vectors (Mikolov et al., 2013; Ji and Eisenstein, 2014) embeds semantics in the same space, which benefits alleviating the data sparseness problem 113 and enables end-to-end and multi-task learning. Recurrent neural networks (RNNs) (Graves, 2012) and the variants like Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Gated Recurrent (GRU) (Cho et al., 2014) neural networks show good performance for capturing long distance dependencies on tasks like Named Entity Recognition (NER) (Chiu and Nichols, 2016; Ma and Hovy, 2016), dependency parsing (Dyer et al., 2015) and semantic composition of documents (Tang et al., 2015). This work describes a hierarchical neural architecture with multiple label outputs for modeling the discourse mode sequence of sentences. 3 Discourse Mode Annotation We are interested in the use of discourse modes in writing composition. This section describes the discourse modes we are going to study, an annotated corpus of student essays and what we learn from corpus analysis. 3.1 Discourse Modes Discourse modes have several taxonomies in the literature. Four basic discourse modes are narration, description, exposition and argument in English composition and rhetoric (Bain, 1890). Smith (2003) proposes five modes for studying discourse passages: narrative, description, report, information and argument. In Chinese composition, discourse modes are categorized into narration, description, exposition, argument and emotion expressing (Zhu, 1983). These taxonomies are similar. Their elements can mostly find corresponding ones in other taxonomies literally or conceptually, e.g., exposition mode has similar functions to information mode. Emotion expressing that is to express the writer’s emotions is relatively special. It can be realized by expressing directly or through lyrical writing with beautiful and poetic language. It is also related to appeal to emotion, which is a method for argumentation by the manipulation of the recipient’s emotions in classical rhetoric (Aristotle and Kennedy, 2006). 
Proper emotion expressing can touch the hearts of the readers and improve the expressiveness of writing. Therefore, considering it as an independent mode is also reasonable. We cope with essays written in Chinese in this work so that we follow the Chinese convention with five discourse modes. Emotion expressing is added on the basis of four recognized discourse modes and Smith’s report mode is viewed as a subtype of description mode: dialogue description. In summary, we study the following discourse modes: • Narration introduces an event or series of events into the universe of discourse. The events are temporally related according to narrative time. E.g., Last year, we drove to San Francisco along the State Route 1 (SR 1). • Exposition has a function to explain or instruct. It provides background information in narrative context. The information presented should be general and (expected to be) well accepted truth. E.g., SR 1 is a major north-south state highway that runs along most of the Pacific coastline of the U.S. • Description re-creates, invents, or vividly show what things are like according to the five senses so that the reader can picture that which is being described. E.g., Along SR 1 are stunning rugged coastline, coastal forests and cliffs, beautiful little towns and some of the West coast’s most amazing nature. • Argument makes a point of view and proves its validity towards a topic in order to convince or persuade the reader. E.g., Point Arena Lighthouse is a must see along SR 1, in my opinion. • Emotion expressing1 presents the writer’s emotions, usually in a subjective, personal and lyrical way, to involve the reader to experience the same situations and to be touched. E.g., I really love the ocean, the coastline and all the amazing scenery along the route. When could I come back again? The distinction between discourse modes is expected to be clarified conceptually by considering their different communication purposes. However, there would still be specific ambiguous and vague cases. We will describe the data annotation and corpus analysis in the following parts. 1In some cases, we use emotion for short. 114 INITIAL FINAL P R F P R F Nar 0.90 0.88 0.89 0.96 0.84 0.90 Exp 0.79 0.73 0.76 0.89 0.76 0.81 Des 0.84 0.74 0.79 0.87 0.65 0.74 Emo 0.75 0.68 0.71 0.79 0.73 0.76 Arg 0.35 0.28 0.31 0.76 0.61 0.68 Avg. 0.73 0.66 0.69 0.87 0.71 0.78 κ 0.55 0.72 Table 1: Inter-annotator agreement between two annotators on the dominant discourse mode. Initial: The result of the first round annotation; Final: The result of the final annotation; κ: Agreement measured with Cohen’s Kappa. 3.2 Data Annotation Discourse modes are almost never found in a pure form but are embedded one within another to help the writer achieve the purpose, but the emphasis varies in different types of writing. We focus on narrative essays. A good narrative composition must properly manipulate multiple discourse modes to make it vivid and impressive. The corpus has 415 narrative essays written by high school students in their native Chinese language.The average number of sentences is 32 and the average length is 670 words. We invited two high school teachers to annotate discourse modes at sentence level, expecting their background help for annotation. A detail manual was discussed before annotation. We notice that discourse modes can mix in the same sentence. Therefore, the annotation standard allows that one sentence can have multiple modes. But we require that every sentence should have a dominant mode. 
The annotators should try to think in the writer’s perspective and guess the writer’s main purpose of writing the sentence in order to decide the dominant mode. Among the discourse modes, description can be applied in various situations. We focus on the following description types: portrait, appearance, action, dialogue, psychological, environment and detail description. If a sentence has any type of description, it would be assigned a description label. 3.3 Corpus Analysis We conducted corpus analysis on the annotated data to gain observations on several aspects. Inter-Annotator Agreement: 50 essays were independently annotated by two annotators. We evaluate the inter-annotator agreement on the domNarration 57.6% Exposition 2.0% Description 23.2% Argument 1.0% Emotion 16.2% Figure 1: The distribution of dominant modes. inant mode. The two annotators’ annotations are used as the golden answer and prediction respectively. We compute the precision, recall and F1score for each discourse mode separately to measure the inter-annotator agreement. Precision and recall are symmetric for the two annotators. The result of the first round annotation is shown in the INITIAL columns of Table 1. The agreement on argument mode is low, while the agreement on other modes is acceptable. The average F1-score is 0.69. The Cohen’s Kappa (Cohen et al., 1960) is 0.55 over all judgements on the dominant mode. The main disagreement on argument lies in the confusion with emotion expressing. Consider the following sentence: Father’s love is the fire that lights the lamp of hope. One annotator thought that it is expressed in an emotional and lyrical way so that the discourse mode should be emotion expressing. The other one thought that it (implicitly) gives a point and should be an argument. Many disagreements happened in cases like this. Based on the observations of the first round annotation, we discussed and updated the manual and let the annotators rechecked their annotations. The final result is shown in the FINAL columns of Table 1. The agreement on description decreases. Annotators seem to be more conservative on labeling description as the dominant mode. The overall average F1-score increases to 0.78 and the Cohen’s Kappa is 0.72. This indicates that humans can reach an acceptable agreement on the dominant discourse mode of sentences after training. Discourse mode distribution: After the training phase, the annotators labeled the whole corpus. Figure 1 shows the distribution of dominant 115 Mode Nar Exp Des Emo Arg Nar 5285 11 2552 65 2 Exp 148 11 1 1 Des 2538 105 8 Emo 1947 63 Arg 318 Table 2: Co-occurrence of discourse modes in the same sentences. The numbers in diagonal indicate the number of sentences with a single mode. from \ to Nar Exp Des Emo Arg Nar 72% 17% 7% 1% Exp 59% 8% 8% 16% 6% Des 42% 53% 3% Emo 25% 2% 4% 66% 1% Arg 27% 4% 12% 54% Begin with 50% 3% 6% 32% 7% End with 12% 1% 2% 76% 6% Table 3: Transition between discourse modes of consecutive sentences and the distribution of discourse modes that essays begin with and end with. discourse modes. The distribution is imbalanced. Narration, description and emotion expressing are the main discourse modes in narrative essays, while exposition and argument are rare. Co-occurrence: Statistics show that 78% of sentences have only one discourse mode, and 19% have two discourse modes, and 3% have more than two discourse modes. Table 2 shows the co-occurrence of discourse modes. 
The numbers that are in the diagonal represent the distribution of discourse modes of sentences with only one mode. The numbers that are not in the diagonal indicate the co-occurrence of modes in the same sentences. We can see that description tends to co-occur with narration and emotion expressing. Description can provide states that happen together with events and emotion-evoking scenes are often described to elicit a strong emotional response, for example: The bright moon hanging on the distant sky reminds me of my hometown miles away. Emotion expressing and argument also co-occur in some cases. It is reasonable, since a successful emotional appeal can enhance the effectiveness of an argument. Generally, these observations are consistent with intuition. Properly combining multiple modes could produce impressive sentences. Transition: Table 3 shows the transition matrix between the dominant modes of consecutive sentences within the same paragraphs. All modes tend to transit to themselves except exposition, which is rare and usually brief. This means that discourse modes of adjacent sentences have high correlation. We also see that narration and emotion are more often at the beginning and the end of essays. The above observations indicate that discourse modes have local preferred patterns. To summarize, the implications of corpus analysis include: (1) Manual identification of discourse modes is feasible with an acceptable inter-annotator agreement; (2) The distribution of discourse modes in narrative essays is imbalanced; (3) About 22% sentences have multiple discourse modes; (4) Discourse modes have local transition patterns that consecutive discourse modes have high correlation. 4 Discourse Mode Identification based on Neural Sequence Labeling This section describes the proposed method for discourse mode identification. According to the corpus analysis, sentences often have multiple discourse modes and prefer local transition patterns. Therefore, we view this task as a multi-label sequence labeling problem. 4.1 Model We propose a hierarchical neural sequence labeling model to capture multiple level information. Figure 2(a) shows the basic architecture. We introduce it from the bottom up. Word level embedding layer: We transform words into continuous vectors, word embeddings. Vector representation of words is useful for capturing semantic relatedness. This should be effective in our case, since large amount of training data is not available. It is unrealistic to learn the embedding parameters on limited data so that we just look up embeddings of words from a pre-trained word embedding table. The pre-trained word embeddings were learned with the Word2Vec toolkit (Mikolov et al., 2013) on a domain corpus which consists of about 490,000 student essays. The embeddings are kept unchanged during learning and prediction. Sentence level GRU layer: Each sentence is a sequence of words. We feed the word embeddings into a forward recurrent neural networks. Here, 116 BiGRU BiGRU BiGRU Mul-Label Mul-Label Discourse level BiGRU layer GRU GRU GRU Ă Ă Ă Ă Ă Sentence level GRU layer s1 s2 sm w21 w22 w2n Word level Embeddings Discourse Modes Ă Mul-Label Ă Mul-Label (a) The basic hierarchical architecture. Ă Ă Ă BiGRU Ă s Ă Ă Fully connected Hidden Layer Fully connected Sigmoid ys,1 ys,2 ys,3 ys,4 ys,5 (b) The detail of the Mul-Label layer Figure 2: The multi-label neural sequence labeling model for discourse mode identification. we use the GRU (Cho et al., 2014) as the recurrent unit. 
The GRU is to make each recurrent unit to adaptively capture dependencies of different time scales. The output of the last time-step is used as the representation of a sentence. Discourse level bidirectional-GRU layer: An essay consists of a sequence of sentences. Accessing information of past and future sentences provides more contextual information for current prediction. Therefore, we use a bidirectional RNN to connect sentences. We use the GRU as the recurrent unit, which is also shown effective on semantic composition of documents for sentiment classification (Tang et al., 2015). The BiGRU represents the concatenation of the hidden states of the forward GRU and the backward GRU units. Multi-Label layer: Since one sentence can have more than one discourse mode, our model allows multiple label outputs. Figure 2(b) details the Mul-Label layer in Figure 2(a). The representation of each sentence after the bidirectional-GRU layer is first fully connected to a hidden layer. The hidden layer output is then fully connected to a five-way output layer, corresponding to five discourse modes. The sigmoid activation function is applied to each way to get the probability that whether corresponding discourse mode should be assigned to the sentence. In the training phase, the probability of any labeled discourse modes is set to 1 and the others are set to 0. In the prediction phase, if the predicted probability of a discourse mode is larger than 0.5, the discourse mode would be assigned. 4.1.1 Considering Paragraph Boundaries Different from NER that processes a single sentence each time, our task processes sequences of sentences in discourse, which are usually grouped by paragraphs to split the whole discourse into several relatively independent segments. Sentences from different paragraphs should have less effect to each other, even though they are adjacent. To capture paragraph boundary information, we insert an empty sentence at the end of every paragraph to indicate a paragraph boundary. The empty sentence is represented by a zero vector and its outputs are set to zeros as well. We expect this modification can better capture position related information. 4.2 Implementation Details We implement the model using the Keras library.2 The models are trained with the binary cross-entropy objective. The optimizer is Adam (Kingma and Ba, 2014). The word embedding dimension is 50. The dimension of the hidden layer in Mul-Label layer is 100. The length of sentences is fixed as 40. All other parameters are set by default parameter values. We adopt early stopping strategy (Caruana et al., 2000) to decide when the training process stops. 4.3 Evaluation 4.3.1 Data We use 100 essays as the test data. The remaining ones are used as the training data. 10% of the shuffled training data is used for validation. 4.3.2 Comparisons We compare the following systems: • SVM: We use bag of ngram (unigram and bigram) features to train a support vector classifier for sentence classification. 2https://github.com/fchollet/keras/ 117 • CNN: We implement a convolutional neural network (CNN) based method (Kim, 2014), as it is the state-of-the-art for sentence classification. • GRU: We use the sentence level representation in Figure 2(a) for sentence classification. • GRU-GRU(GG): This method is introduced in this paper in §4.1, but it doesn’t consider paragraph information. • GRU-GRU-SEG (GG-SEG): The model considers paragraph information on the top of GG as introduced in §4.1.1. 
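Before comparing these systems, a rough tf.keras sketch of the GG-style hierarchy in Figure 2 (word embeddings, a sentence-level GRU, a discourse-level BiGRU, and a per-sentence sigmoid over the five modes) is given below. The embedding dimension (50), hidden layer size (100), and sentence length (40) follow the text; the vocabulary size, number of sentences per essay, GRU widths, and hidden activation are not specified in the paper and are placeholders, so this is a reconstruction rather than the authors' released code.

```python
import numpy as np
from tensorflow.keras import layers, Model

VOCAB, EMB_DIM, SENT_LEN, MAX_SENTS, N_MODES = 20000, 50, 40, 60, 5

# Sentence-level encoder: word ids -> one sentence vector (last GRU state).
word_ids = layers.Input(shape=(SENT_LEN,), dtype="int32")
emb_layer = layers.Embedding(VOCAB, EMB_DIM, trainable=False)   # frozen, as in the paper
sent_vec = layers.GRU(100)(emb_layer(word_ids))                 # GRU width is a placeholder
sentence_encoder = Model(word_ids, sent_vec)

# Discourse-level model: a padded sequence of sentences -> five sigmoids per sentence.
essay = layers.Input(shape=(MAX_SENTS, SENT_LEN), dtype="int32")
sent_reps = layers.TimeDistributed(sentence_encoder)(essay)
context = layers.Bidirectional(layers.GRU(100, return_sequences=True))(sent_reps)
hidden = layers.TimeDistributed(layers.Dense(100, activation="tanh"))(context)  # activation assumed
modes = layers.TimeDistributed(layers.Dense(N_MODES, activation="sigmoid"))(hidden)

model = Model(essay, modes)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Load the pre-trained Word2Vec table (random stand-in here) into the frozen embedding layer.
emb_layer.set_weights([np.random.rand(VOCAB, EMB_DIM).astype("float32")])
```

At prediction time, any mode whose sigmoid output exceeds 0.5 would be assigned to the sentence, matching the decision rule described in §4.1.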
The first three classification based methods classify sentences independently. To deal with multiple labels, the classifiers are trained for each discourse mode separately. At prediction time, if the classifier for any discourse mode predicts a sentence as positive, the corresponding discourse mode would be assigned. 4.3.3 Evaluation Results Table 4 shows the experimental results. We evaluate the systems for each discourse mode with F1score, which is the harmonic mean of precision and recall. The best performance is in bold. The SVM performs worst among all systems. The reason is due to the data sparseness and termmismatch problem, since the size of the annotated dataset is not big enough. In contrast, systems based on neural networks with pre-trained word embeddings achieve much better performance. The CNN and GRU have comparable performance. The GRU is slightly better. The two methods don’t consider the semantic representations of adjacent sentences. The GG and GG-SEG explore the semantic information of sentences in a sequence by the bidirectional GRU layer. The results demonstrate that considering such information improve the performance on all discourse modes. This proves the advantage of sequential identification compared with isolated sentence classification. We can see that the GG-SEG further improves the performance on three minority discourse modes compared with GG. This means that the minority modes may have stronger preference to special locations. Exposition benefits most, since many exposition sentences in our dataset are isolated. Model \ Mode Nar Des Emo Arg Exp SVM 0.672 0.588 0.407 0.152 0.095 CNN 0.793 0.764 0.594 0.333 0.293 GRU 0.800 0.784 0.615 0.402 0.364 GG 0.822 0.797 0.680 0.423 0.481 GG-SEG 0.815 0.791 0.717 0.483 0.667 Table 4: The F1-scores of systems on each discourse mode. The performance on argument is not so good. As we discussed in corpus analysis, argument and emotion expressing mode interact frequently. Because the amount of emotion expressing sentences is much more, distinguishing argument from them is hard. Actually, their functions in narrative essays seem to be similar that both are to deepen the author’s response or evoke the reader’s response to the story. The overall average F1-score can reach to 0.7 and the performance on identifying three most common discourse modes are consistent, with an average F1-score above 0.76 using the proposed neural sequence labeling models. Automatic discourse mode identification should be feasible. 5 Essay Scoring with Discourse Modes Discourse mode identification can potentially provide features for downstream NLP applications. This section describes our attempt to explore discourse modes for automatic essay scoring (AES). 5.1 Essay Scoring Framework We adopt the standard regression framework for essay scoring. We use support vector regression (SVR) and Bayesian linear ridge regression (BLRR), which are used in recent work (Phandi et al., 2015). The key is to design effective features. 5.2 Features The basic feature sets are based on (Phandi et al., 2015).The original feature sets include: • Length features • Part-Of-Speech (POS) features • Prompt features • Bag of words features We re-implement the feature extractors exactly according to the description in (Phandi et al., 2015) except for the POS features, since we don’t 118 Score Prompt #Essays Avg. len Range Median 1 4000 628 0-60 46 2 4000 660 0-50 41 3 3300 642 0-50 41 Table 5: Details of the three datasets for AES. have correct POS ngrams for Chinese. 
We complement two additional features: (1) The number of words occur in Chinese Proficiency Test 6 vocabulary; (2) The number of Chinese idioms used. We further design discourse mode related features for each essay: • Mode ratio: For each discourse mode, we compute its mode ratio according to ratio = #sentences with the discourse mode #sentences in the essay . Such features indicate the distribution of discourse modes. • Bag of ngrams of discourse modes: We use the number of unigrams and bigrams of the dominant discourse modes of the sequence of sentences in the essay as features. 5.3 Experimental Settings The experiments were conducted on narrative essays written by Chinese middle school students in native language during regional tests. There are three prompts and students are required to write an essay related to the given prompt with no less than 600 Chinese characters. All these essays were evaluated by professional teachers. We randomly sampled essays from each prompt for experiments. Table 5 shows the details of the datasets. We ran experiments on each prompt dataset respectively by 5-fold cross-validation. The GG-SEG model was used to identify discourse modes of sentences. Notice that a sentence can have multiple discourse modes. The mode ratio features are computed for each mode separately. When extracting the bag of ngrams of discourse modes features, the discourse mode with highest prediction probability was chosen as the dominant discourse mode. We use the Quadratic Weighted Kappa (QWK) as the evaluation metric. 5.4 Evaluation Results Table 6 shows the evaluation results of AES on three datasets. We can see that the BLRR algorithm performs better than the SVR algorithm. No QWK Score Prompt 1 2 3 SVR-Basic 0.554 0.468 0.457 + mode 0.6 0.501 0.481 BLRR-Basic 0.683 0.557 0.513 + mode 0.696 0.565 0.527 Table 6: Evaluation results of AES on three datasets. Basic: the basic feature sets; mode: discourse mode features. Prompt 1 2 3 Avg LEN 0.59 0.52 0.45 0.52 Des 0.23 0.24 0.24 0.24 Emo 0.09 0.15 0.12 0.12 Exp -0.07 0.01 0.01 -0.03 Arg -0.08 -0.06 -0.1 -0.08 Nar -0.11 -0.15 -0.12 -0.13 Table 7: Pearson correlation coefficients of mode ratio to essay score. LEN represents essay length. matter which algorithm is adopted, adding discourse mode features make positive contributions for AES compared with using basic feature sets. The trends are consistent over all three datasets. Impact of discourse mode ratio on scores: We are interested in which discourse mode correlates to essay scores best. Table 7 shows the Pearson correlation coefficient between the mode ratio and essay score. LEN represents the correlation of essay length and is listed as a reference. We can see that the ratio of narration has a negative correlation, which means just narrating stories without auxiliary discourse modes would lead to poor scores. The description mode ratio has the strongest positive correlation to essay scores. This may indicate that using vivid language to provide detail information is essential in writing narrative essays. Emotion expressing also has a positive correlation. It is reasonable since emotional writing can involve readers into the stories. The ratio of argument shows a negative correlation. The reason may be that: first, the identification of argument is not good enough; second, the existence of an argument doesn’t mean the quality of argumentation is good. Exposition has little effect on essay scores. 
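For concreteness, the two discourse-mode feature groups defined in §5.2 can be computed roughly as follows; the input format (a dominant mode plus a set of predicted modes per sentence) is just one possible representation of the identification output, not something fixed by the paper.

```python
from collections import Counter

MODES = ["narration", "exposition", "description", "argument", "emotion"]

def discourse_mode_features(sentences):
    """sentences: list of (dominant_mode, set_of_predicted_modes), one per sentence.
    Returns the per-mode ratio features and the unigram/bigram counts over the
    dominant-mode sequence."""
    n = len(sentences)
    ratios = {m: sum(m in modes for _, modes in sentences) / n for m in MODES}
    dominant = [d for d, _ in sentences]
    ngrams = Counter(dominant)                      # unigrams of dominant modes
    ngrams.update(zip(dominant, dominant[1:]))      # bigrams of dominant modes
    return ratios, ngrams

# Toy essay of four sentences:
essay = [("narration", {"narration"}),
         ("narration", {"narration", "description"}),
         ("description", {"description"}),
         ("emotion", {"emotion"})]
ratios, ngrams = discourse_mode_features(essay)     # e.g. ratios["description"] == 0.5
```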
Generally, the distribution of discourse modes shows correlations to the quality of essays. This may relate to the difficulties of manipulating different discourse modes. It is easy for students to use narration, but it is more difficult to manipulate description and emotion expressing well. As a result, the ability of descriptive and emotional writ119 100 200 400 600 Length threshold 0.40 0.45 0.50 0.55 0.60 0.65 QWK PROMPT 1 basic basic+mode 100 200 400 600 Length threshold 0.44 0.46 0.48 0.50 0.52 0.54 0.56 0.58 PROMPT 2 basic basic+mode 100 200 400 600 Length threshold 0.43 0.44 0.45 0.46 0.47 0.48 0.49 0.50 0.51 PROMPT 3 basic basic+mode Figure 3: QWK scores on essays satisfying different length thresholds on three prompts. Basic: the basic feature sets; mode: discourse mode features. ing should be an indicator of language proficiency and can better distinguish the quality of writing. Impact on scoring essays with various length: It is easy to understand that length is a strong indicator for essay scoring. It is interesting to study that when the effect of length becomes weaker, e.g., the lengths of essays are close, how does the performance of the AES system change? We conducted experiments on essays with various lengths. Only essays that the length is no less than a given threshold are selected for evaluation. The threshold is set to 100, 200, 400 and 600 Chinese characters respectively. We ran 5-fold crossvalidation with BLRR on the datasets after essay selection. Figure 3 shows the results on three datasets. We can see the following trends: (1) The QWK scores decrease along with shorter essays are removed gradually; (2) Adding discourse mode features always improves the performance; (3) As the threshold becomes larger, the improvements by adding discourse mode features become larger. The results indicate that the current AES system can achieve a high correlation score when the lengths of essays differ obviously. Even the simple features like length can judge that short essays tend to have low scores. However, when the lengths of essays are close, AES would face greater challenges, because it is required to deeper understand the properties of well written essays. In such situations, features that can model more advanced aspects of writing, such as discourse modes, should play a more important role. It should be also essential for evaluating essays written in the native language of the writer, when spelling and grammar are not big issues any more. 6 Conclusion This paper has introduced a fundamental but less studied task in NLP—discourse mode identification, which is designed in this work to automatically identify five discourse modes in essays. A corpus of narrative student essays was manually annotated with discourse modes at sentence level, with acceptable inter-annotator agreement. The corpus analysis revealed several aspects of characteristics of discourse modes including the distribution, co-occurrence and transition patterns. Considering these characteristics, we proposed a neural sequence labeling approach for identifying discourse modes. The experimental results demonstrate that automatic discourse mode identification is feasible. We evaluated discourse mode features for automatic essay scoring and draw preliminary observations. Discourse mode features can make positive contributions, especially in challenging situations when simple surface features don’t work well. The ratio of description and emotion expressing is shown to be positively correlated to essay scores. 
In future, we plan to exploit discourse mode identification for providing novel features for more downstream NLP applications. Acknowledgements The research work is partially funded by the National High Technology Research and Development Program (863 Program) of China (No.2015AA015409), National Natural Science Foundation of China (No.61402304), Ministry of Education (No.14YJAZH046), Beijing Municipal Education Commission (KM201610028015, Connotation Development) and Beijing Advanced Innovation Center for Imaging Technology. 120 References Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neural networks. In Proceedings of ACL 2016. pages 715–725. Omer Aristotle and George A Kennedy. 2006. On rhetoric: A theory of civic discourse. Oxford University Press. Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater R⃝v. 2. The Journal of Technology, Learning and Assessment 4(3). Alexander Bain. 1890. English composition and rhetoric. Longmans, Green & Company. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics 34(1):1–34. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research 3(Feb):1137–1155. Richard Reed Braddock, Richard Lloyd-Jones, and Lowell Schoer. 1963. Research in written composition. JSTOR. Eric Brill and Robert C Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of ACL 2000. pages 286–293. Cleanth Brooks and Robert Penn Warren. 1958. Modern rhetoric. Harcourt, Brace. Jill Burstein. 2003. The e-rater R⃝scoring engine: Automated essay scoring with natural language processing. . Jill Burstein, Daniel Marcu, and Kevin Knight. 2003. Finding the write stuff: Automatic identification of discourse structure in student essays. Intelligent Systems, IEEE 18(1):32–39. Rich Caruana, Steve Lawrence, and Lee Giles. 2000. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Proceedings of NIPS 2000. pages 402–408. Hongbo Chen and Ben He. 2013. Automated essay scoring by maximizing human-machine agreement. In Proceedings of EMNLP 2013. pages 1741–1752. Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics 4:357–370. Kyunghyun Cho, Bart van Merri¨enboer Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of EMNLP 2014. pages 1724–1734. Alexander Clark, Chris Fox, and Shalom Lappin. 2013. The handbook of computational linguistics and natural language processing. John Wiley & Sons. Rosemary Clerehan and Rachelle Buchbinder. 2006. Toward a more valid account of functional text quality: The case of the patient information leaflet. Text & Talk-An Interdisciplinary Journal of Language, Discourse Communication Studies 26(1):39–68. Jacob Cohen et al. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement 20(1):37–46. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Robert J Connors. 1981. The rise and fall of the modes of discourse. 
2017
11
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1193–1203 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1110 Adversarial Multi-Criteria Learning for Chinese Word Segmentation Xinchi Chen, Zhan Shi, Xipeng Qiu∗, Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {xinchichen13,zshi16,xpqiu,xjhuang}@fudan.edu.cn Abstract Different linguistic perspectives cause many diverse segmentation criteria for Chinese word segmentation (CWS). Most existing methods focus on improving the performance for each single criterion. However, it is interesting to exploit these different criteria and mine their common underlying knowledge. In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria. Experiments on eight corpora with heterogeneous segmentation criteria show that the performance on each corpus obtains a significant improvement, compared to single-criterion learning. Source codes of this paper are available on Github1. 1 Introduction Chinese word segmentation (CWS) is a preliminary and important task for Chinese natural language processing (NLP). Currently, the state-of-the-art methods are based on statistical supervised learning algorithms, and rely on a large-scale annotated corpus whose cost is extremely expensive. Although there have been great achievements in building CWS corpora, they are somewhat incompatible due to different segmentation criteria. As shown in Table 1, given a sentence “姚明进入总决赛(YaoMing reaches the final)”, the two commonly-used corpora, PKU’s People’s Daily (PKU) (Yu et al., 2001) and Penn Chinese Treebank (CTB) (Fei, 2000), use different segmentation criteria. In a sense, it is a waste of resources if we fail to fully exploit these corpora. ∗Corresponding author. 1https://github.com/FudanNLP Corpora Yao Ming reaches the final CTB 姚明 进入 总决赛 PKU 姚 明 进入 总 决赛 Table 1: Illustration of the different segmentation criteria. Recently, some efforts have been made to exploit heterogeneous annotation data for Chinese word segmentation or part-of-speech tagging (Jiang et al., 2009; Sun and Wan, 2012; Qiu et al., 2013; Li et al., 2015, 2016). These methods adopted stacking or multi-task architectures and showed that heterogeneous corpora can help each other. However, most of these models adopt shallow linear classifiers with discrete features, which makes it difficult to design the shared feature spaces, usually resulting in a complex model. Fortunately, recent deep neural models provide a convenient way to share information among multiple tasks (Collobert and Weston, 2008; Luong et al., 2015; Chen et al., 2016). In this paper, we propose an adversarial multi-criteria learning framework for CWS by integrating shared knowledge from multiple segmentation criteria.
Specifically, we regard each segmentation criterion as a single task and propose three different shared-private models under the framework of multi-task learning (Caruana, 1997; Ben-David and Schuller, 2003), where a shared layer is used to extract the criteria-invariant features, and a private layer is used to extract the criteria-specific features. Inspired by the success of adversarial strategy on domain adaption (Ajakan et al., 2014; Ganin et al., 2016; Bousmalis et al., 2016), we further utilize adversarial strategy to make sure the shared layer can extract the common underlying and criteria-invariant features, which are suitable for all the criteria. Finally, we exploit the eight segmentation criteria on the five simplified Chi1193 nese and three traditional Chinese corpora. Experiments show that our models are effective to improve the performance for CWS. We also observe that traditional Chinese could benefit from incorporating knowledge from simplified Chinese. The contributions of this paper could be summarized as follows. • Multi-criteria learning is first introduced for CWS, in which we propose three sharedprivate models to integrate multiple segmentation criteria. • An adversarial strategy is used to force the shared layer to learn criteria-invariant features, in which an new objective function is also proposed instead of the original cross-entropy loss. • We conduct extensive experiments on eight CWS corpora with different segmentation criteria, which is by far the largest number of datasets used simultaneously. 2 General Neural Model for Chinese Word Segmentation Chinese word segmentation task is usually regarded as a character based sequence labeling problem. Specifically, each character in a sentence is labeled as one of L = {B, M, E, S}, indicating the begin, middle, end of a word, or a word with single character. There are lots of prevalent methods to solve sequence labeling problem such as maximum entropy Markov model (MEMM), conditional random fields (CRF), etc. Recently, neural networks are widely applied to Chinese word segmentation task for their ability to minimize the effort in feature engineering (Zheng et al., 2013; Pei et al., 2014; Chen et al., 2015a,b). Specifically, given a sequence with n characters X = {x1, . . . , xn}, the aim of CWS task is to figure out the ground truth of labels Y ∗= {y∗ 1, . . . , y∗ n}: Y ∗= arg max Y ∈Ln p(Y |X), (1) where L = {B, M, E, S}. The general architecture of neural CWS could be characterized by three components: (1) a character embedding layer; (2) feature layers consisting of several classical neural networks and (3) a tag inference layer. The role of feature layers is to extract features, which could be either convolution neural network or recurrent neural network. In this Characters Embedding Layer Feature Layer Inference Layer B M E S y3 y2 y1 y4 x4 x1 x2 x3 Forward Backward Score Figure 1: General neural architecture for Chinese word segmentation. paper, we adopt the bi-directional long short-term memory neural networks followed by CRF as the tag inference layer. Figure 1 illustrates the general architecture of CWS. 2.1 Embedding layer In neural models, the first step usually is to map discrete language symbols to distributed embedding vectors. Formally, we lookup embedding vector from embedding matrix for each character xi as exi ∈Rde, where de is a hyper-parameter indicating the size of character embedding. 2.2 Feature layers We adopt bi-directional long short-term memory (Bi-LSTM) as feature layers. 
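For concreteness, a minimal PyTorch-style sketch of this three-part backbone (character embedding layer, Bi-LSTM feature layer, and the per-character tag scores fed to the CRF inference layer) could look as follows; the class name and shapes are illustrative rather than taken from the authors' released code, and the CRF layer itself is omitted:

```python
import torch
import torch.nn as nn

class NeuralCWSBackbone(nn.Module):
    """Embedding layer + Bi-LSTM feature layer + per-character tag scores.

    Dimension names follow the paper: d_e (character embedding size) and
    d_h (LSTM hidden size); the tag set is L = {B, M, E, S}. The CRF
    inference layer that normalizes over label sequences is not shown.
    """
    def __init__(self, vocab_size, d_e=100, d_h=100, num_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_e)
        self.bilstm = nn.LSTM(d_e, d_h, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * d_h, num_tags)   # emission score per tag

    def forward(self, char_ids):
        # char_ids: (batch, seq_len) integer character indices
        e = self.embed(char_ids)        # (batch, seq_len, d_e)
        h, _ = self.bilstm(e)           # (batch, seq_len, 2 * d_h)
        return self.score(h)            # (batch, seq_len, |L|)

# Example: scores = NeuralCWSBackbone(vocab_size=5000)(torch.randint(0, 5000, (2, 7)))
```

The returned scores are the per-tag emissions on which a first-order linear-chain CRF is applied to infer the best label sequence.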
While there are numerous LSTM variants, here we use the LSTM architecture used by (Jozefowicz et al., 2015), which is similar to the architecture of (Graves, 2013) but without peep-hole connections. LSTM LSTM introduces gate mechanism and memory cell to maintain long dependency information and avoid gradient vanishing. Formally, LSTM, with input gate i, output gate o, forget gate f and memory cell c, could be expressed as:   ii oi fi ˜ci  =   σ σ σ ϕ   ( Wg⊺ [ exi hi−1 ] + bg ) , (2) ci = ci−1 ⊙fi + ˜ci ⊙ii, (3) hi = oi ⊙ϕ(ci), (4) where Wg ∈R(de+dh)×4dh and bg ∈R4dh are trainable parameters. dh is a hyper-parameter, in1194 dicating the hidden state size. Function σ(·) and ϕ(·) are sigmoid and tanh functions respectively. Bi-LSTM In order to incorporate information from both sides of sequence, we use bi-directional LSTM (Bi-LSTM) with forward and backward directions. The update of each Bi-LSTM unit can be written precisely as follows: hi = −→h i ⊕←−h i, (5) = Bi-LSTM(exi, −→h i−1, ←−h i+1, θ), (6) where −→h i and ←−h i are the hidden states at position i of the forward and backward LSTMs respectively; ⊕is concatenation operation; θ denotes all parameters in Bi-LSTM model. 2.3 Inference Layer After extracting features, we employ conditional random fields (CRF) (Lafferty et al., 2001) layer to inference tags. In CRF layer, p(Y |X) in Eq (1) could be formalized as: p(Y |X) = Ψ(Y |X) ∑ Y ′∈Ln Ψ(Y ′|X). (7) Here, Ψ(Y |X) is the potential function, and we only consider interactions between two successive labels (first order linear chain CRFs): Ψ(Y |X) = n ∏ i=2 ψ(X, i, yi−1, yi), (8) ψ(x, i, y′, y) = exp(s(X, i)y + by′y), (9) where by′y ∈R is trainable parameters respective to label pair (y′, y). Score function s(X, i) ∈R|L| assigns score for each label on tagging the i-th character: s(X, i) = W⊤ s hi + bs, (10) where hi is the hidden state of Bi-LSTM at position i; Ws ∈Rdh×|L| and bs ∈R|L| are trainable parameters. 3 Multi-Criteria Learning for Chinese Word Segmentation Although neural models are widely used on CWS, most of them cannot deal with incompatible criteria with heterogonous segmentation criteria simultaneously. Inspired by the success of multi-task learning (Caruana, 1997; Ben-David and Schuller, 2003; Liu et al., 2016a,b), we regard the heterogenous criteria as multiple “related” tasks, which could improve the performance of each other simultaneously with shared information. Formally, assume that there are M corpora with heterogeneous segmentation criteria. We refer Dm as corpus m with Nm samples: Dm = {(X(m) i , Y (m) i )}Nm i=1, (11) where Xm i and Y m i denote the i-th sentence and the corresponding label in corpus m. To exploit the shared information between these different criteria, we propose three sharing models for CWS task as shown in Figure 2. The feature layers of these three models consist of a private (criterion-specific) layer and a shared (criterioninvariant) layer. The difference between three models is the information flow between the task layer and the shared layer. Besides, all of these three models also share the embedding layer. 3.1 Model-I: Parallel Shared-Private Model In the feature layer of Model-I, we regard the private layer and shared layer as two parallel layers. 
For corpus m, the hidden states of shared layer and private layer are: h(s) i =Bi-LSTM(exi, −→h (s) i−1, ←−h (s) i+1, θs), (12) h(m) i =Bi-LSTM(exi, −→h (m) i−1, ←−h (m) i+1, θm), (13) and the score function in the CRF layer is computed as: s(m)(X, i) = W(m) s ⊤ [ h(s) i h(m) i ] + b(m) s , (14) where W(m) s ∈R2dh×|L| and b(m) s ∈R|L| are criterion-specific parameters for corpus m. 3.2 Model-II: Stacked Shared-Private Model In the feature layer of Model-II, we arrange the shared layer and private layer in stacked manner. The private layer takes output of shared layer as input. For corpus m, the hidden states of shared layer and private layer are: h(s) i = Bi-LSTM(exi, −→h (s) i−1, ←−h (s) i+1, θs), (15) h(m) i = Bi-LSTM( [ exi h(s) i ] , −→h (m) i−1, ←−h (m) i+1, θm) (16) and the score function in the CRF layer is computed as: s(m)(X, i) = W(m) s ⊤h(m) i + b(m) s , (17) where W(m) s ∈R2dh×|L| and b(m) s ∈R|L| are criterion-specific parameters for corpus m. 1195 CRF CRF Task A Task B X(A) X(B) Y(B) Y(A) (a) Model-I CRF CRF Task A Task B X(A) X(B) Y(B) Y(A) (b) Model-II CRF CRF Task A Task B X(A) X(B) Y(B) Y(A) (c) Model-III Figure 2: Three shared-private models for multi-criteria learning. The yellow blocks are the shared BiLSTM layer, while the gray block are the private Bi-LSTM layer. The yellow circles denote the shared embedding layer. The red information flow indicates the difference between three models. 3.3 Model-III: Skip-Layer Shared-Private Model In the feature layer of Model-III, the shared layer and private layer are in stacked manner as ModelII. Additionally, we send the outputs of shared layer to CRF layer directly. The Model III can be regarded as a combination of Model-I and Model-II. For corpus m, the hidden states of shared layer and private layer are the same with Eq (15) and (16), and the score function in CRF layer is computed as the same as Eq (14). 3.4 Objective function The parameters of the network are trained to maximize the log conditional likelihood of true labels on all the corpora. The objective function Jseg can be computed as: Jseg(Θm, Θs) = M ∑ m=1 Nm ∑ i=1 log p(Y (m) i |X(m) i ; Θm, Θs), (18) where Θm and Θs denote all the parameters in private and shared layers respectively. 4 Incorporating Adversarial Training for Shared Layer Although the shared-private model separates the feature space into shared and private spaces, there is no guarantee that sharable features do not exist in private feature space, or vice versa. Inspired by the work on domain adaptation (Ajakan et al., 2014; Ganin et al., 2016; Bousmalis et al., 2016), we hope that the features extracted by shared layer is invariant across the heterogonous segmentation criteria. Therefore, we jointly optimize the shared CRF CRF Task A Task B AVG Discriminator Shared-private Model X(A) X(B) Y(B) Y(A) A/B Softmax Linear Figure 3: Architecture of Model-III with adversarial training strategy for shared layer. The discriminator firstly averages the hidden states of shared layer, then derives probability over all possible criteria by applying softmax operation after a linear transformation. layer via adversarial training (Goodfellow et al., 2014). Therefore, besides the task loss for CWS, we additionally introduce an adversarial loss to prevent criterion-specific feature from creeping into shared space as shown in Figure 3. We use a criterion discriminator which aims to recognize which criterion the sentence is annotated by using the shared features. 
Specifically, given a sentence X with length n, we refer to h(s) X as shared features for X in one of the sharing models. Here, we compute h(s) X by simply averaging the hidden states of shared layer h(s) X = 1 n ∑n i h(s) xi . The criterion discriminator computes the probability p(·|X) over all criteria as: p(·|X; Θd, Θs) = softmax(W⊤ d h(s) X + bd), (19) 1196 where Θd indicates the parameters of criterion discriminator Wd ∈Rdh×M and bd ∈RM; Θs denotes the parameters of shared layers. 4.1 Adversarial loss function The criterion discriminator maximizes the cross entropy of predicted criterion distribution p(·|X) and true criterion. max Θd J 1 adv(Θd) = M ∑ m=1 Nm ∑ i=1 log p(m|X(m) i ; Θd, Θs). (20) An adversarial loss aims to produce shared features, such that a criterion discriminator cannot reliably predict the criterion by using these shared features. Therefore, we maximize the entropy of predicted criterion distribution when training shared parameters. max Θs J 2 adv(Θs) = M ∑ m=1 Nm ∑ i=1 H ( p(m|X(m) i ; Θd, Θs) ) , (21) where H(p) = −∑ i pi log pi is an entropy of distribution p. Unlike (Ganin et al., 2016), we use entropy term instead of negative cross-entropy. 5 Training Finally, we combine the task and adversarial objective functions. J (Θ; D) = Jseg(Θm, Θs) + J 1 adv(Θd) + λJ 2 adv(Θs), (22) where λ is the weight that controls the interaction of the loss terms and D is the training corpora. The training procedure is to optimize two discriminative classifiers alternately as shown in Algorithm 1. We use Adam (Kingma and Ba, 2014) with minibatchs to maximize the objectives. Notably, when using adversarial strategy, we firstly train 2400 epochs (each epoch only trains on eight batches from different corpora), then we only optimize Jseg(Θm, Θs) with Θs fixed until convergence (early stop strategy). 6 Experiments 6.1 Datasets To evaluate our proposed architecture, we experiment on eight prevalent CWS datasets from SIGHAN2005 (Emerson, 2005) and SIGHAN2008 (Jin and Chen, 2008). Table 2 gives the details of the eight datasets. Among these datasets, AS, CITYU and CKIP are traditional Chinese, while the Algorithm 1 Adversarial multi-criteria learning for CWS task. 1: for i = 1; i <= n_epoch; i + + do 2: # Train tag predictor for CWS 3: for m = 1; m <= M; m + + do 4: # Randomly pick data from corpus m 5: B = {X, Y }bm 1 ∈Dm 6: Θs += α∇ΘsJ (Θ; B) 7: Θm += α∇ΘmJ (Θ; B) 8: end for 9: # Train criterion discriminator 10: for m = 1; m <= M; m + + do 11: B = {X, Y }bm 1 ∈Dm 12: Θd += α∇ΘdJ (Θ; B) 13: end for 14: end for remains, MSRA, PKU, CTB, NCC and SXU, are simplified Chinese. We use 10% data of shuffled train set as development set for all datasets. 6.2 Experimental Configurations For hyper-parameter configurations, we set both the character embedding size de and the dimensionality of LSTM hidden states dh to 100. The initial learning rate α is set to 0.01. The loss weight coefficient λ is set to 0.05. Since the scale of each dataset varies, we use different training batch sizes for datasets. Specifically, we set batch sizes of AS and MSR datasets as 512 and 256 respectively, and 128 for remains. We employ dropout strategy on embedding layer, keeping 80% inputs (20% dropout rate). For initialization, we randomize all parameters following uniform distribution at (−0.05, 0.05). 
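Putting the objective of Eq. (22) together with the hyper-parameters above (λ = 0.05, Adam-style optimizers, dropout on the embedding layer), one round of the alternating updates in Algorithm 1 might be sketched as follows; `model`, `discriminator` and the loss helpers are hypothetical names, not the released implementation:

```python
import torch
import torch.nn.functional as F

def adversarial_epoch(model, discriminator, corpora_batches,
                      task_optim, disc_optim, lam=0.05):
    """One round of the alternating updates sketched in Algorithm 1.

    `model` is assumed to return (seg_log_likelihood, shared_repr) for a
    batch of corpus m, and `discriminator` maps shared_repr to logits over
    the M criteria.
    """
    # Step 1: train the CWS tagger (shared + private parameters), Eq. (18) + (21).
    for m, batch in enumerate(corpora_batches):
        seg_ll, shared_repr = model(batch, corpus=m)
        probs = F.softmax(discriminator(shared_repr), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        loss = -seg_ll - lam * entropy   # maximize log-likelihood and entropy
        task_optim.zero_grad()
        loss.backward()
        task_optim.step()

    # Step 2: train the criterion discriminator, Eq. (20), with shared features fixed.
    for m, batch in enumerate(corpora_batches):
        with torch.no_grad():
            _, shared_repr = model(batch, corpus=m)
        logits = discriminator(shared_repr)
        target = torch.full((logits.size(0),), m, dtype=torch.long)
        disc_loss = F.cross_entropy(logits, target)
        disc_optim.zero_grad()
        disc_loss.backward()
        disc_optim.step()
```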
We simply map traditional Chinese characters to simplified Chinese, and optimize on the same character embedding matrix across datasets, which is pre-trained on Chinese Wikipedia corpus, using word2vec toolkit (Mikolov et al., 2013). Following previous work (Chen et al., 2015b; Pei et al., 2014), all experiments including baseline results are using pre-trained character embedding with bigram feature. 6.3 Overall Results Table 3 shows the experiment results of the proposed models on test sets of eight CWS datasets, which has three blocks. (1) In the first block, we can see that the performance is boosted by using Bi-LSTM, and the 1197 Datasets Words Chars Word Types Char Types Sents OOV Rate Sighan05 MSRA Train 2.4M 4.1M 88.1K 5.2K 86.9K Test 0.1M 0.2M 12.9K 2.8K 4.0K 2.60% AS Train 5.4M 8.4M 141.3K 6.1K 709.0K Test 0.1M 0.2M 18.8K 3.7K 14.4K 4.30% Sighan08 PKU Train 1.1M 1.8M 55.2K 4.7K 47.3K Test 0.2M 0.3M 17.6K 3.4K 6.4K CTB Train 0.6M 1.1M 42.2K 4.2K 23.4K Test 0.1M 0.1M 9.8K 2.6K 2.1K 5.55% CKIP Train 0.7M 1.1M 48.1K 4.7K 94.2K Test 0.1M 0.1M 15.3K 3.5K 10.9K 7.41% CITYU Train 1.1M 1.8M 43.6K 4.4K 36.2K Test 0.2M 0.3M 17.8K 3.4K 6.7K 8.23% NCC Train 0.5M 0.8M 45.2K 5.0K 18.9K Test 0.1M 0.2M 17.5K 3.6K 3.6K 4.74% SXU Train 0.5M 0.9M 32.5K 4.2K 17.1K Test 0.1M 0.2M 12.4K 2.8K 3.7K 5.12% Table 2: Details of the eight datasets. performance of Bi-LSTM cannot be improved by merely increasing the depth of networks. In addition, although the F value of LSTM model in (Chen et al., 2015b) is 97.4%, they additionally incorporate an external idiom dictionary. (2) In the second block, our proposed three models based on multi-criteria learning boost performance. Model-I gains 0.75% improvement on averaging F-measure score compared with BiLSTM result (94.14%). Only the performance on MSRA drops slightly. Compared to the baseline results (Bi-LSTM and stacked Bi-LSTM), the proposed models boost the performance with the help of exploiting information across these heterogeneous segmentation criteria. Although various criteria have different segmentation granularities, there are still some underlying information shared. For instance, MSRA and CTB treat family name and last name as one token “宁泽涛(NingZeTao)”, whereas some other datasets, like PKU, regard them as two tokens, “宁(Ning)” and “泽涛(ZeTao)”. The partial boundaries (before “宁(Ning)” or after “涛(Tao)”) can be shared. (3) In the third block, we introduce adversarial training. By introducing adversarial training, the performances are further boosted, and Model-I is slightly better than Model-II and Model-III. The adversarial training tries to make shared layer keep criteria-invariant features. For instance, as shown in Table 3, when we use shared information, the performance on MSRA drops (worse than baseline result). The reason may be that the shared parameters bias to other segmentation criteria and introduce noisy features into shared parameters. When we additionally incorporate the adversarial strategy, we observe that the performance on MSRA is improved and outperforms the baseline results. We could also observe the improvements on other datasets. However, the boost from the adversarial strategy is not significant. The main reason might be that the proposed three sharing models implicitly attempt to keep invariant features by shared parameters and learn discrepancies by the task layer. 6.4 Speed To further explore the convergence speed, we plot the results on development sets through epochs. 
Figure 4 shows the learning curve of Model-I without incorporating adversarial strategy. As shown in Figure 4, the proposed model makes progress gradually on all datasets. After about 1000 epochs, the performance becomes stable and convergent. We also test the decoding speed, and our models process 441.38 sentences per second averagely. As the proposed models and the baseline models (Bi-LSTM and stacked Bi-LSTM) are nearly in the same complexity, all models are nearly the same efficient. However, the time consumption of training process varies from model to model. For the models without adversarial training, it costs about 10 hours for training (the same for stacked Bi-LSTM to train eight datasets), whereas it takes about 16 hours for the models with adversarial training. All the experiments are conducted on the hardware with Intel(R) Xeon(R) CPU E52643 v3 @ 3.40GHz and NVIDIA GeForce GTX TITAN X. 1198 Models MSRA AS PKU CTB CKIP CITYU NCC SXU Avg. LSTM P 95.13 93.66 93.96 95.36 91.85 94.01 91.45 95.02 93.81 R 95.55 94.71 92.65 85.52 93.34 94.00 92.22 95.05 92.88 F 95.34 94.18 93.30 95.44 92.59 94.00 91.83 95.04 93.97 OOV 63.60 69.83 66.34 76.34 68.67 65.48 56.28 69.46 67.00 Bi-LSTM P 95.70 93.64 93.67 95.19 92.44 94.00 91.86 95.11 93.95 R 95.99 94.77 92.93 95.42 93.69 94.15 92.47 95.23 94.33 F 95.84 94.20 93.30 95.30 93.06 94.07 92.17 95.17 94.14 OOV 66.28 70.07 66.09 76.47 72.12 65.79 59.11 71.27 68.40 Stacked Bi-LSTM P 95.69 93.89 94.10 95.20 92.40 94.13 91.81 94.99 94.03 R 95.81 94.54 92.66 95.40 93.39 93.99 92.62 95.37 94.22 F 95.75 94.22 93.37 95.30 92.89 94.06 92.21 95.18 94.12 OOV 65.55 71.50 67.92 75.44 70.50 66.35 57.39 69.69 68.04 Multi-Criteria Learning Model-I P 95.67 94.44 94.93 95.95 93.99 95.10 92.54 96.07 94.84 R 95.82 95.09 93.73 96.00 94.52 95.60 92.69 96.08 94.94 F 95.74 94.76 94.33 95.97 94.26 95.35 92.61 96.07 94.89 OOV 69.89 74.13 72.96 81.12 77.58 80.00 64.14 77.05 74.61 Model-II P 95.74 94.60 94.82 95.90 93.51 95.30 92.26 96.17 94.79 R 95.74 95.20 93.76 95.94 94.56 95.50 92.84 95.95 94.94 F 95.74 94.90 94.28 95.92 94.03 95.40 92.55 96.06 94.86 OOV 69.67 74.87 72.28 79.94 76.67 81.05 61.51 77.96 74.24 Model-III P 95.76 93.99 94.95 95.85 93.50 95.56 92.17 96.10 94.74 R 95.89 95.07 93.48 96.11 94.58 95.62 92.96 96.13 94.98 F 95.82 94.53 94.21 95.98 94.04 95.59 92.57 96.12 94.86 OOV 70.72 72.59 73.12 81.21 76.56 82.14 60.83 77.56 74.34 Adversarial Multi-Criteria Learning Model-I+ADV P 95.95 94.17 94.86 96.02 93.82 95.39 92.46 96.07 94.84 R 96.14 95.11 93.78 96.33 94.70 95.70 93.19 96.01 95.12 F 96.04 94.64 94.32 96.18 94.26 95.55 92.83 96.04 94.98 OOV 71.60 73.50 72.67 82.48 77.59 81.40 63.31 77.10 74.96 Model-II+ADV P 96.02 94.52 94.65 96.09 93.80 95.37 92.42 95.85 94.84 R 95.86 94.98 93.61 95.90 94.69 95.63 93.20 96.07 94.99 F 95.94 94.75 94.13 96.00 94.24 95.50 92.81 95.96 94.92 OOV 72.76 75.37 73.13 82.19 77.71 81.05 62.16 76.88 75.16 Model-III+ADV P 95.92 94.25 94.68 95.86 93.67 95.24 92.47 96.24 94.79 R 95.83 95.11 93.82 96.10 94.48 95.60 92.73 96.04 94.96 F 95.87 94.68 94.25 95.98 94.07 95.42 92.60 96.14 94.88 OOV 70.86 72.89 72.20 81.65 76.13 80.71 63.22 77.88 74.44 Table 3: Results of proposed models on test sets of eight CWS datasets. There are three blocks. The first block consists of two baseline models: Bi-LSTM and stacked Bi-LSTM. The second block consists of our proposed three models without adversarial training. The third block consists of our proposed three models with adversarial training. 
Here, P, R, F, OOV indicate the precision, recall, F value and OOV recall rate respectively. The maximum F values in each block are highlighted for each dataset. 6.5 Error Analysis We further investigate the benefits of the proposed models by comparing the error distributions between the single-criterion learning (baseline model Bi-LSTM) and multi-criteria learning (Model-I and Model-I with adversarial training) as shown in Figure 5. According to the results, we could observe that a large proportion of points lie above diagonal lines in Figure 5a and Figure 5b, which implies that performance benefit from integrating knowledge and complementary information from other corpora. As shown in Table 3, on the test set of CITYU, the performance of Model-I and its adversarial version (Model-I+ADV) boost from 92.17% to 95.59% and 95.42% respectively. In addition, we observe that adversarial strategy is effective to prevent criterion specific features from creeping into shared space. For instance, the segmentation granularity of personal name is often different according to heterogenous criteria. With the help of adversarial strategy, our models could correct a large proportion of mistakes on personal name. Table 4 lists the examples from 2333-th and 89-th sentences in test sets of PKU and MSRA datasets respectively. 1199 0 500 1,000 1,500 2,000 2,500 90 92 94 96 epoches F-value(%) MSRA AS PKU CTB CKIP CITYU NCC SXU Figure 4: Convergence speed of Model-I without adversarial training on development sets of eight datasets. 40 50 60 70 80 90 100 40 50 60 70 80 90 100 Multi-Criteria Learning Base Line (a) 40 50 60 70 80 90 100 40 50 60 70 80 90 100 Multi-Criteria Learning + Adversary Base Line (b) Figure 5: F-measure scores on test set of CITYU dataset. Each point denotes a sentence, with the (x, y) values of each point denoting the F-measure scores of the two models, respectively. (a) is comparison between Bi-LSTM and Model-I. (b) is comparison between Bi-LSTM and Model-I with adversarial training. 7 Knowledge Transfer We also conduct experiments of whether the shared layers can be transferred to the other related tasks or domains. In this section, we investigate the ability of knowledge transfer on two experiments: (1) simplified Chinese to traditional Chinese and (2) formal texts to informal texts. 7.1 Simplified Chinese to Traditional Chinese Traditional Chinese and simplified Chinese are two similar languages with slightly difference on character forms (e.g. multiple traditional characters might map to one simplified character). We investigate that if datasets in traditional Chinese and simplified Chinese could help each other. Table 5 gives the results of Model-I on 3 traditioModels PKU-2333 MSRA-89 Golds Roh Moo-hyun Mu Ling Ying 卢 武铉 穆玲英 Base Line 卢武铉 穆 玲英 Model-I 卢武铉 穆 玲英 Modell-I+ADV 卢 武铉 穆玲英 Table 4: Segmentation cases of personal names. Models AS CKIP CITYU Avg. Baseline(Bi-LSTM) 94.20 93.06 94.07 93.78 Model-I∗ 94.12 93.24 95.20 94.19 Table 5: Performance on 3 traditional Chinese datasets. Model-I∗means that the shared parameters are trained on 5 simplified Chinese datasets and are fixed for traditional Chinese datasets. Here, we conduct Model-I without incorporating adversarial training strategy. nal Chinese datasets under the help of 5 simplified Chinese datasets. Specifically, we firstly train the model on simplified Chinese datasets, then we train traditional Chinese datasets independently with shared parameters fixed. 
As we can see, the average performance is boosted by 0.41% on F-measure score (from 93.78% to 94.19%), which indicates that shared features learned from simplified Chinese segmentation criteria can help to improve performance on traditional Chinese. Like MSRA, as AS dataset is relatively large (train set of 5.4M tokens), the features learned by shared parameters might bias to other datasets and thus hurt performance on such large dataset AS. 7.2 Formal Texts to Informal Texts 7.2.1 Dataset We use the NLPCC 2016 dataset2 (Qiu et al., 2016) to evaluate our model on micro-blog texts. The NLPCC 2016 data are provided by the shared task in the 5th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2016): Chinese Word Segmentation and POS Tagging for micro-blog Text. Unlike the popular used newswire dataset, the NLPCC 2016 dataset is collected from Sina Weibo3, which consists of the informal texts from micro-blog with the various topics, such as finance, sports, entertainment, and so on. The information of the dataset is shown in Table 6. 2https://github.com/FudanNLP/ NLPCC-WordSeg-Weibo 3http://www.weibo.com/ 1200 Dataset Words Chars Word Types Char Types Sents OOV Rate Train 421,166 688,743 43,331 4,502 20,135 Dev 43,697 73,246 11,187 2,879 2,052 6.82% Test 187,877 315,865 27,804 3,911 8,592 6.98% Table 6: Statistical information of NLPCC 2016 dataset. Models P R F OOV Baseline(Bi-LSTM) 93.56 94.33 93.94 70.75 Model-I∗ 93.65 94.83 94.24 74.72 Table 7: Performances on the test set of NLPCC 2016 dataset. Model-I∗means that the shared parameters are trained on 8 Chinese datasets (Table 2) and are fixed for NLPCC dataset. Here, we conduct Model-I without incorporating adversarial training strategy. 7.2.2 Results Formal documents (like the eight datasets in Table 2) and micro-blog texts are dissimilar in many aspects. Thus, we further investigate that if the formal texts could help to improve the performance of micro-blog texts. Table 7 gives the results of Model-I on the NLPCC 2016 dataset under the help of the eight datasets in Table 2. Specifically, we firstly train the model on the eight datasets, then we train on the NLPCC 2016 dataset alone with shared parameters fixed. The baseline model is BiLSTM which is trained on the NLPCC 2016 dataset alone. As we can see, the performance is boosted by 0.30% on F-measure score (from 93.94% to 94.24%), and we could also observe that the OOV recall rate is boosted by 3.97%. It shows that the shared features learned from formal texts can help to improve the performance on of micro-blog texts. 8 Related Works There are many works on exploiting heterogeneous annotation data to improve various NLP tasks. Jiang et al. (2009) proposed a stacking-based model which could train a model for one specific desired annotation criterion by utilizing knowledge from corpora with other heterogeneous annotations. Sun and Wan (2012) proposed a structurebased stacking model to reduce the approximation error, which makes use of structured features such as sub-words. These models are unidirectional aid and also suffer from error propagation problem. Qiu et al. (2013) used multi-tasks learning framework to improve the performance of POS tagging on two heterogeneous datasets. Li et al. (2015) proposed a coupled sequence labeling model which could directly learn and infer two heterogeneous annotations. Chao et al. (2015) also utilize multiple corpora using coupled sequence labeling model. 
These methods adopt the shallow classifiers, therefore suffering from the problem of defining shared features. Our proposed models use deep neural networks, which can easily share information with hidden shared layers. Chen et al. (2016) also adopted neural network models for exploiting heterogeneous annotations based on neural multi-view model, which can be regarded as a simplified version of our proposed models by removing private hidden layers. Unlike the above models, we design three sharing-private architectures and keep shared layer to extract criterion-invariance features by introducing adversarial training. Moreover, we fully exploit eight corpora with heterogeneous segmentation criteria to model the underlying shared information. 9 Conclusions & Future Works In this paper, we propose adversarial multi-criteria learning for CWS by fully exploiting the underlying shared knowledge across multiple heterogeneous criteria. Experiments show that our proposed three shared-private models are effective to extract the shared information, and achieve significant improvements over the single-criterion methods. Acknowledgments We appreciate the contribution from Jingjing Gong and Jiacheng Xu. Besides, we would like to thank the anonymous reviewers for their valuable comments. This work is partially funded by National Natural Science Foundation of China (No. 61532011 and 61672162), Shanghai Municipal Science and Technology Commission on (No. 16JC1420401). 1201 References Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. 2014. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446 . S. Ben-David and R. Schuller. 2003. Exploiting task relatedness for multiple task learning. Learning Theory and Kernel Machines pages 567–580. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems. pages 343– 351. Rich Caruana. 1997. Multitask learning. Machine learning 28(1):41–75. Jiayuan Chao, Zhenghua Li, Wenliang Chen, and Min Zhang. 2015. Exploiting heterogeneous annotations for weibo word segmentation and pos tagging. In National CCF Conference on Natural Language Processing and Chinese Computing. Springer, pages 495–506. Hongshen Chen, Yue Zhang, and Qun Liu. 2016. Neural network for heterogeneous annotations. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing . Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for chinese word segmentation. In Proceedings of Annual Meeting of the Association for Computational Linguistics.. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term memory neural networks for chinese word segmentation. In EMNLP. pages 1197–1206. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML. Thomas Emerson. 2005. The second international chinese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing. volume 133. XIA Fei. 2000. The part-of-speech tagging guidelines for the penn chinese treebank (3.0). URL: http://www. cis. upenn. edu/˜ chinese/segguide. 3rd. ch. pdf . Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. 
Domain-adversarial training of neural networks. Journal of Machine Learning Research 17(59):1–35. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. pages 2672–2680. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 . W. Jiang, L. Huang, and Q. Liu. 2009. Automatic adaptation of annotation standards: Chinese word segmentation and POS tagging: a case study. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing. pages 522–530. G. Jin and X. Chen. 2008. The fourth international chinese language processing bakeoff: Chinese word segmentation, named entity recognition and chinese pos tagging. In Sixth SIGHAN Workshop on Chinese Language Processing. page 69. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of The 32nd International Conference on Machine Learning. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning. Zhenghua Li, Jiayuan Chao, Min Zhang, and Wenliang Chen. 2015. Coupled sequence labeling on heterogeneous annotations: Pos tagging as a case study. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Zhenghua Li, Jiayuan Chao, Min Zhang, and Jiwen Yang. 2016. Fast coupled sequence labeling on heterogeneous annotations via context-aware pruning. In Proceedings of EMNLP. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016a. Deep multi-task learning with shared memory. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016b. Recurrent neural network for text classification with multi-task learning. In Proceedings of International Joint Conference on Artificial Intelligence. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114 . 1202 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Wenzhe Pei, Tao Ge, and Chang Baobao. 2014. Maxmargin tensor neural network for chinese word segmentation. In Proceedings of ACL. Xipeng Qiu, Peng Qian, and Zhan Shi. 2016. Overview of the NLPCC-ICCPOL 2016 shared task: Chinese word segmentation for micro-blog texts. In International Conference on Computer Processing of Oriental Languages. Springer, pages 901–906. Xipeng Qiu, Jiayi Zhao, and Xuanjing Huang. 2013. Joint chinese word segmentation and pos tagging on heterogeneous annotated corpora with multiple task learning. In EMNLP. pages 658–668. Weiwei Sun and Xiaojun Wan. 2012. Reducing approximation and estimation errors for chinese lexical processing with heterogeneous annotations. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersVolume 1. pages 232–241. S. 
Yu, J. Lu, X. Zhu, H. Duan, S. Kang, H. Sun, H. Wang, Q. Zhao, and W. Zhan. 2001. Processing norms of modern Chinese corpus. Technical report, Technical report. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and pos tagging. In EMNLP. pages 647–657. 1203
2017
110
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1204–1214 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1111 Neural Joint Model for Transition-based Chinese Syntactic Analysis Shuhei Kurita, Daisuke Kawahara, Sadao Kurohashi Graduate School of Informatics, Kyoto University {kurita, dk, kuro}@nlp.ist.i.kyoto-u.ac.jp Abstract We present neural network-based joint models for Chinese word segmentation, POS tagging and dependency parsing. Our models are the first neural approaches to fully joint Chinese analysis, which is known to prevent the error propagation problem of pipeline models. Although word embeddings play a key role in dependency parsing, they cannot be applied directly to the joint task in the previous work. To address this problem, we propose embeddings of character strings, in addition to words. Experiments show that our models outperform existing systems in Chinese word segmentation and POS tagging, and achieve favorable accuracies in dependency parsing. We also explore bi-LSTM models with fewer features. 1 Introduction Dependency parsers have been enhanced by the use of neural networks and embedding vectors (Chen and Manning, 2014; Weiss et al., 2015; Zhou et al., 2015; Alberti et al., 2015; Andor et al., 2016; Dyer et al., 2015). When these dependency parsers process sentences in English and other languages that use symbols for word separation, they can be very accurate. However, for languages that do not contain word separation symbols, dependency parsers are used in pipeline processes with word segmentation and POS tagging models, and encounter serious problems because of error propagation. In particular, Chinese word segmentation is notoriously difficult because sentences are written without word dividers and Chinese words are not clearly defined. Hence, the pipeline of word segmentation, POS tagging and dependency parsing always suffers from word segmentation errors. Once words have been wrongly segmented, word embeddings and traditional one-hot word features, used in dependency parsers, will mistake the precise meanings of the original sentences. As a result, pipeline models achieve dependency scores of around 80% for Chinese. A traditional solution to this error propagation problem is to use joint models. Many Chinese words play multiple grammatical roles with only one grammatical form. Therefore, determining the word boundaries and the subsequent tagging and dependency parsing are closely correlated. Transition-based joint models for Chinese word segmentation, POS tagging and dependency parsing are proposed by Hatori et al. (2012) and Zhang et al. (2014). Hatori et al. (2012) state that dependency information improves the performance of word segmentation and POS tagging, and develop the first transition-based joint word segmentation, POS tagging and dependency parsing model. Zhang et al. (2014) expand this and find that both inter-word dependencies and intra-word dependencies are helpful in word segmentation and POS tagging. Although the models of Hatori et al. (2012) and Zhang et al.
(2014) perform better than pipeline models, they rely on the one-hot representation of characters and words, and do not assume the similarities among characters and words. In addition, not only words and characters but also many incomplete tokens appear in the transitionbased joint parsing process. Such incomplete or unknown words (UNK) could become important cues for parsing, but they are not listed in dictionaries or pre-trained word embeddings. Some recent studies show that character-based embeddings are effective in neural parsing (Ballesteros et al., 2015; Zheng et al., 2015), but their models could not be directly applied to joint models because they use given word segmentations. To solve 1204 these problems, we propose neural network-based joint models for word segmentation, POS tagging and dependency parsing. We use both character and word embeddings for known tokens and apply character string embeddings for unknown tokens. Another problem in the models of Hatori et al. (2012) and Zhang et al. (2014) is that they rely on detailed feature engineering. Recently, bidirectional LSTM (bi-LSTM) based neural network models with very few feature extraction are proposed (Kiperwasser and Goldberg, 2016; Cross and Huang, 2016). In their models, the bi-LSTM is used to represent the tokens including their context. Indeed, such neural networks can observe whole sentence through the bi-LSTM. This biLSTM is similar to that of neural machine translation models of Bahdanau et al. (2014). As a result, Kiperwasser and Goldberg (2016) achieve competitive scores with the previous state-of-theart models. We also develop joint models with ngram character string bi-LSTM. In the experiments, we obtain state-of-the-art Chinese word segmentation and POS tagging scores, and the pipeline of the dependency model achieves the better dependency scores than the previous joint models. To the best of our knowledge, this is the first model to use embeddings and neural networks for Chinese full joint parsing. Our contributions are summarized as follows: (1) we propose the first embedding-based fully joint parsing model, (2) we use character string embeddings for UNK and incomplete tokens. (3) we also explore bi-LSTM models to avoid the detailed feature engineering in previous approaches. (4) in experiments using Chinese corpus, we achieve state-of-the-art scores in word segmentation, POS tagging and dependency parsing. 2 Model All full joint parsing models we present in this paper use the transition-based algorithm in Section 2.1 and the embeddings of character strings in Section 2.2. We present two neural networks: the feed-forward neural network models in Section 2.3 and the bi-LSTM models in Section 2.4. 2.1 Transition-based Algorithm for Joint Segmentation, POS Tagging, and Dependency Parsing Based on Hatori et al. (2012), we use a modified arc-standard algorithm for character transi技术有了新的进展。 新的进展。 Stack (word-based) Buffer (character-based) SH RL SH 技术 RR 了 AP SH Technology have made new progress. Left children (word-based) Right children (word-based) Transitions History: 有 Figure 1: Transition-based Chinese joint model for word segmentation, POS tagging and dependency parsing. tions (Figure 1). The model consists of one buffer and one stack. The buffer contains characters in the input sentence, and the stack contains words shifted from the buffer. The stack words may have their child nodes. The words in the stack are formed by the following transition operations. 
• SH(t) (shift): Shift the first character of the buffer to the top of the stack as a new word. • AP (append): Append the first character of the buffer to the end of the top word of the stack. • RR (reduce-right): Reduce the right word of the top two words of the stack, and make the right child node of the left word. • RL (reduce-left): Reduce the left word of the top two words of the stack, and make the left child node of the right word. The RR and RL operations are the same as those of the arc-standard algorithm (Nivre, 2004a). SH makes a new word whereas AP makes the current word longer by adding one character. The POS tags are attached with the SH(t) transition. In this paper, we explore both greedy models and beam decoding models. This parsing algorithm works in both types. We also develop a joint model of word segmentation and POS tagging, along with a dependency parsing model. The joint model of word segmentation and POS tagging does not have RR and RL transitions. 2.2 Embeddings of Character Strings First, we explain the embeddings used in the neural networks. Later, we explain details of the neural networks in Section 2.3 and 2.4. 1205 Both meaningful words and incomplete tokens appear during transition-based joint parsing. Although embeddings of incomplete tokens are not used in previous work, they could become useful features in several cases. For example, “南京 东路” (Nanjing East Road, the famous shopping street of Shanghai) is treated as a single Chinese word in the Penn Chinese Treebank (CTB) corpus. There are other named entities of this form in CTB, e.g, “北京西路” (Beijing West Road) and “湘西路” (Hunan West Road). In these cases, “南京” (Nanjing) and “北京” (Beijing) are location words, while “东路” (East Road) and “西 路” (West Road) are sub-words. “东路” and “西 路” are similar in terms of their character composition and usage, which is not sufficiently considered in the previous work. Moreover, representations of incomplete tokens are helpful for compensating the segmentation ambiguity. Suppose that the parser makes over-segmentation errors and segments “南京东路” to “南京” and “东 路”. In this case, “东路” becomes UNK. However, the models could infer that “东路” is also a location, from its character composition and neighboring words. This could give models robustness of segmentation errors. In our models, we prepare the word and character embeddings in the pretraining. We also use the embeddings of character strings for sub-words and UNK which are not in the pre-trained embeddings. The characters and words are embedded in the same vector space during pre-training. We prepare the same training corpus with the segmented word files and the segmented character files. Both files are concatenated and learned by word2vec (Mikolov et al., 2013). We use the embeddings of 1M frequent words and characters. Words and characters that are in the training set and do not have pre-trained embeddings are given randomly initialized embeddings. The development set and the test set have out-of-vocabulary (OOV) tokens for these embeddings. The embeddings of the unknown character strings are generated in the neural computation graph when they are required. Consider a character string c1c2 · · · cn consisting of characters ci. When this character string is not in the pretrained embeddings, the model obtains the embeddings v(c1c2 · · · cn) by the mean of each character embeddings Pn i=1 v(ci). 
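A small sketch of this lookup with the mean-of-characters fallback, i.e. v(c1 · · · cn) = (1/n) Σi v(ci) for strings outside the pre-trained table, might look as follows; the dictionary-style `table` and the function name are our own simplifications of the pre-trained word2vec vectors described above:

```python
import numpy as np

def embed_token(token, table, dim=200):
    """Embedding for a word, a character, or an arbitrary character string.

    `table` maps known words and characters to pre-trained vectors living in
    the same space. Unknown character strings fall back to the mean of their
    character embeddings; a shared UNK vector is only the last resort.
    """
    if token in table:                        # known word or single character
        return table[token]
    char_vecs = [table[c] for c in token if c in table]
    if char_vecs:                             # v(c1..cn) = mean of v(ci)
        return np.mean(char_vecs, axis=0)
    return table.get("<UNK>", np.zeros(dim))

# Example: if "南京东路" is not in the table, its vector is the mean of the
# vectors of the characters "南", "京", "东" and "路".
```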
Embeddings of words, characters and character strings have the same diWord embeddings Character embeddings mean Embedding layer Hidden layer 1 Hidden layer 2 ReLU ReLU Character Strings softmax pgreedy t ρ Greedy output Beam output Figure 2: The feed-forward neural network model. The greedy output is obtained at the second top layer, while the beam decoding output is obtained at the top layer. The input character strings are translated into word embeddings if the embeddings of the character strings are available. Otherwise, the embeddings of the character strings are used. mension and are chosen in the neural computation graph. We avoid using the “UNK” vector as far as possible, because this degenerates the information about unknown tokens. However, models use the “UNK” vector if the parser encounters characters that are not in the pre-trained embeddings, though this is quite uncommon. 2.3 Feed-forward Neural Network 2.3.1 Neural Network We present a feed-forward neural network model in Figure 2. The neural network for greedy training is based on the neural networks of Chen and Manning (2014) and Weiss et al. (2015). We add the dynamic generation of the embeddings of character strings for unknown tokens, as described in Section 2.2. This neural network has two hidden layers with 8,000 dimensions. This is larger than Chen and Manning (2014) (200 dimensions) or Weiss et al. (2015) (1,024 or 2,048 dimensions). We use the ReLU for the activation function of the hidden layers (Nair and Hinton, 2010) and the softmax function for the output layer of the greedy 1206 Type Value Size of h1,h2 8,000 Initial learning rate 0.01 Initial learning rate of beam decoding 0.001 Embedding vocabulary size 1M Embedding vector size 200 Small embedding vector size 20 Minibatch size 200 Table 1: Parameters for neural network structure and training. neural network. There are three randomly initialized weight matrices between the embedding layers and the softmax function. The loss function L(θ) for the greedy training is L(θ) = − X s,t log pgreedy s,t + λ 2 ||θ||2, pgreedy s,t (β) ∝exp  X j wtjβj + bt  , where t denotes one transition among the transition set T ( t ∈T ). s denotes one element of the single mini-batch. β denotes the output of the previous layer. w and b denote the weight matrix and the bias term. θ contains all parameters. We use the L2 penalty term and the Dropout. The backprop is performed including the word and character embeddings. We use Adagrad (Duchi et al., 2010) to optimize learning rate. We also consider Adam (Kingma and Ba, 2015) and SGD, but find that Adagrad performs better in this model. The other learning parameters are summarized in Table 1. In our model implementation, we divide all sentences into training batches. Sentences in the same training batches are simultaneously processed by the neural mini-batches. By doing so, the model can parse all sentences of the training batch in the number of transitions required to parse the longest sentence in the batch. This allows the model to parse more sentences at once, as long as the neural mini-batch can be allocated to the GPU memory. This can be applied to beam decoding. 2.3.2 Features The features of this neural network are listed in Table 2. We use three kinds of features: (1) features obtained from Hatori et al. (2012) by removing combinations of features, (2) features obtained from Chen and Manning (2014), (3) original features related to character strings. 
In particular, Type Features Stack word and tags s0w, s1w, s2w s0p, s1p, s2p Stack 1 children and tags s0l0w, s0r0w, s0l1w, s0r1w s0l0p, s0r0p, s0l1p, s0r1p Stack 2 children s1l0w, s1r0w, s1l1w, s1r1w Children of children s0l0lw, s0r0rw, s1l0lw, s1r0rw Buffer characters b0c, b1c, b2c, b3c Previously shifted words q0w, q1w Previously shifted tags q0p, q1p Character of q0 q0e Parts of q0 word q0f1, q0f2, q0f3 Strings across q0 and buf. q0b1, q0b2, q0b3 Strings of buffer characters b0-2, b0-3, b0-4 b1-3, b1-4, b1-5 b2-4, b2-5, b2-6 b3-5, b3-6 b4-6 Length of q0 lenq0 Table 2: Features for the joint model. “q0” denotes the last shifted word and “q1” denotes the word shifted before “q0”. In “part of q0 word”, “f1”, “f2” and “f3” denote sub-words of “q0”, which are 1, 2 and 3 sequential characters including the last character of “q0” respectively. In “strings across q0 and buf.”, “q0bX” denotes “q0” and X sequential characters of the buffer. This feature could capture words that boundaries have not determined yet. In “strings of buffer characters”, “bX-Y” denotes sequential characters from the Xth to Y -th character of the buffer. The suffix “e” denotes the end character of the word. The dimension of the embedding of “length of q0” is 20. the original features include sub-words, character strings across the buffer and the stack, and character strings in the buffer. Character strings across the buffer and stack could capture the currentlysegmented word. To avoid using character strings that are too long, we restrict the length of character string to a maximum of four characters. Unlike Hatori et al. (2012), we use sequential characters of sentences for features, and avoid handengineered combinations among one-hot features, because such combinations could be automatically generated in the neural hidden layers as distributed representations (Hinton et al., 1986). In the later section, we evaluate a joint model for word segmentation and POS tagging. This model does not use the children and children-ofchildren of stack words as features. 1207 2.3.3 Beam Search Structured learning plays an important role in previous joint parsing models for Chinese.1 In this paper, we use the structured learning model proposed by Weiss et al. (2015) and Andor et al. (2016). In Figure 2, the output layer for the beam decoding is at the top of the network. There are a perceptron layer which has inputs from the two hidden layers and the greedy output layer: [h1, h2, pgreedy(y)]. This layer is learned by the following cost function (Andor et al., 2016): L(d∗ 1:j; θ) = − j X i=1 ρ(d∗ 1:i−1, d∗ i ; θ) + ln X d′ 1:j∈B1:j exp j X i=1 ρ(d′ 1:i−1, d′ i; θ), where d1:j denotes the transition path and d∗ 1:j denotes the gold transition path. B1:j is the set of transition paths from 1 to j step in beam. ρ is the value of the top layer in Figure 2. This training can be applied throughout the network. However, we separately train the last beam layer and the previous greedy network in practice, as in Andor et al. (2016). First, we train the last perceptron layer using the beam cost function freezing the previous greedy-trained layers. After the last layer has been well trained, backprop is performed including the previous layers. We notice that training the embedding layer at this stage could make the results worse, and thus we exclude it. Note that this whole network backprop requires considerable GPU memory. Hence, we exclude particularly large batches from the training, because they cannot be on GPU memory. 
We use multiple beam sizes for training, because models can be trained faster with small beam sizes. After training with a small beam size, we continue with larger beam sizes. The fully joint model is tested with a beam size of 16.

Hatori et al. (2012) use special alignment steps in beam decoding: the AP transition takes a size-2 step, whereas the other transitions take a size-1 step. With this alignment, the total number of steps for an N-character sentence is guaranteed to be 2N − 1 (excluding the root arc) for any transition path. This can be interpreted as the AP transition doing two things: appending a character and resolving an intra-word dependency. This alignment stepping assumes that, within each Chinese word, every character has an intra-word dependency on a character to its right.

1 Hatori et al. (2012) report that structured learning with a beam size of 64 is optimal.

2.4 Bi-LSTM Model

In Section 2.3, we described a neural network model with feature extraction. Unfortunately, although this model is fast and very accurate, it has two problems: (1) the neural network cannot see the whole sentence, and (2) it relies on feature engineering. To solve these problems, Kiperwasser and Goldberg (2016) propose a bi-LSTM neural network parsing model. Surprisingly, their model uses very few features, and the bi-LSTM is applied to represent the context of the features. Their neural network consists of three parts: a bi-LSTM, a feature extraction function and a multilayer perceptron (MLP). First, all tokens in the sentence are converted to embeddings. Second, the bi-LSTM reads all embeddings of the sentence. Third, the feature function extracts the feature representations of tokens from the bi-LSTM layer. Finally, an MLP with one hidden layer outputs the transition scores of the transition-based parser.

In this paper, we propose a Chinese joint parsing model with simple and global features using n-gram bi-LSTMs and a simple feature extraction function. The model is described in Figure 3. We consider that Chinese sentences consist of tokens, including words, UNKs and incomplete tokens, which can carry meaning and are useful for parsing. Such tokens appear in many parts of the sentence and have arbitrary lengths. To capture them, we propose the n-gram bi-LSTM. The n-gram bi-LSTM reads through characters c_i ... c_{i+n-1} of the sentence (c_i is the i-th character). For example, the 1-gram bi-LSTM reads each character, and the 2-gram bi-LSTM reads two consecutive characters c_i c_{i+1}. After the n-gram forward LSTM reads the character string c_i ... c_{i+n-1}, it next reads c_{i+1} ... c_{i+n}; the backward LSTM reads from c_{i+1} ... c_{i+n} toward c_i ... c_{i+n-1}. This allows models to capture any n-gram character string in the input sentence.2 All n-gram inputs to the bi-LSTM are given by the embeddings of words and characters, or by the dynamically generated embeddings of character strings, as described in Section 2.2.

2 At the end of a sentence of length N, character strings c_i ... c_N (N < i+n−1), which are shorter than n characters, are used.
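The following short sketch enumerates the character strings read by each n-gram bi-LSTM for an example sentence; it reflects our reading of the description above (including footnote 2) rather than the authors' code.

```python
# An illustrative sketch of the n-gram inputs to the n-gram bi-LSTMs:
# for each n, the forward LSTM reads c_i..c_{i+n-1}, then c_{i+1}..c_{i+n},
# and the backward LSTM reads the same strings in reverse order. Strings
# shorter than n at the end of the sentence are kept, as in footnote 2.
def ngram_strings(chars, max_n=4):
    """Return {n: [c_i..c_{i+n-1} for each start position i]}."""
    grams = {}
    for n in range(1, max_n + 1):
        grams[n] = ["".join(chars[i:i + n]) for i in range(len(chars))]
    return grams

sentence = list("技术有了新的进展。")          # example sentence from Figure 3
for n, seq in ngram_strings(sentence).items():
    print(n, seq)
```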
Figure 3: The bi-LSTM model. (a): The Chinese sentence "技术有了新的进展。" has been processed; the stack is word-based and the buffer is character-based, and 1-gram to 4-gram bi-LSTMs read the corresponding character strings before an MLP and softmax produce p^greedy_t. (b): Similar to the feed-forward neural network model, the embeddings of words, characters and character strings are used. In this figure, the word "技术" (technology) has its own embedding, while the token "技术有了" (technology have made) does not.

Although these arbitrary n-gram tokens produce UNKs, character string embeddings can capture similarities among them. Following the bi-LSTM layer, the feature function extracts the corresponding outputs of the bi-LSTM layer. We summarize the features in Table 3. Finally, an MLP and the softmax function output the transition probabilities. We use an MLP with three hidden layers, as for the model in Section 2.3. We train this neural network with the loss function for greedy training.

Model        Features
4 features   s0w, s1w, s2w, b0c
8 features   s0w, s1w, s2w, b0c, s0r0w, s0l0w, s1r0w, s1l0w

Table 3: Features for the bi-LSTM models. All features are words and characters. We experiment with both the four-feature and eight-feature models.

               #snt    #oov
CTB-5  Train   18k
       Dev.    350     553
       Test    348     278
CTB-7  Train   31k
       Dev.    10k     13k
       Test    10k     13k

Table 4: Summary of datasets.

3 Experiments

3.1 Experimental Settings

We use the Penn Chinese Treebank 5.1 (CTB-5) and 7 (CTB-7) datasets to evaluate our models, following the split of Jiang et al. (2008) for CTB-5 and Wang et al. (2011) for CTB-7. The statistics of the datasets are presented in Table 4. We use the Chinese Gigaword Corpus for embedding pre-training. Our model is developed for unlabeled dependencies. The development sets are used for parameter tuning. Following Hatori et al. (2012) and Zhang et al. (2014), we use the standard word-level evaluation with F1-measure: POS tags and dependencies cannot be counted as correct unless the corresponding words are correctly segmented.

We trained three models: SegTag, SegTagDep and Dep. SegTag is the joint word segmentation and POS tagging model. SegTagDep is the full joint segmentation, tagging and dependency parsing model. Dep is the dependency parsing model, which is similar to Weiss et al. (2015) and Andor et al. (2016) but uses the embeddings of character strings. Dep compensates for UNKs and segmentation errors caused by the preceding word segmentation by using embeddings of character strings; we examine this effect later. Most experiments are conducted on GPUs, but some of the beam decoding processes are performed on CPUs because of the large mini-batch size. The neural network is implemented with Theano.

Model                     Seg     POS
Hatori+12 SegTag          97.66   93.61
Hatori+12 SegTag(d)       98.18   94.08
Hatori+12 SegTagDep       97.73   94.46
Hatori+12 SegTagDep(d)    98.26   94.64
M. Zhang+14 EAG           97.76   94.36
Y. Zhang+15               98.04   94.47
SegTag(g)                 98.41   94.84
SegTag                    98.60   94.76

Table 5: Joint segmentation and POS tagging scores. Both scores are in F-measure. In Hatori et al. (2012), (d) denotes the use of dictionaries. (g) denotes greedy trained models.
All scores for previous models are taken from Hatori et al. (2012), Zhang et al. (2014) and Zhang et al. (2015).

3.2 Results

3.2.1 Joint Segmentation and POS Tagging

First, we evaluate the joint segmentation and POS tagging model (SegTag). Table 5 compares the performance of segmentation and POS tagging using the CTB-5 dataset. We train two models: a greedy-trained model and a model trained with beams of size 4. We compare our model to three previous approaches: Hatori et al. (2012), Zhang et al. (2014) and Zhang et al. (2015). Our SegTag joint model is superior to these previous models, including Hatori et al. (2012)'s model with rich dictionary information, in terms of both segmentation and POS tagging accuracy.

3.2.2 Joint Segmentation, POS Tagging and Dependency Parsing

Table 6 presents the results of our full joint model. We employ the greedy trained full joint model SegTagDep(g) and the beam decoding model SegTagDep. All scores for the existing models in this table are taken from Zhang et al. (2014). Though our model surpasses the previous best end-to-end joint models in terms of segmentation and POS tagging, the dependency score is slightly lower than that of the previous models. The greedy model SegTagDep(g) achieves slightly lower scores than the beam models, although it runs considerably faster because it does not use beam decoding.

Model             Seg     POS     Dep
Hatori+12         97.75   94.33   81.56
M. Zhang+14 EAG   97.76   94.36   81.70
SegTagDep(g)      98.24   94.49   80.15
SegTagDep         98.37   94.83   81.42

Table 6: Joint Segmentation, POS Tagging and Dependency Parsing. Hatori et al. (2012)'s CTB-5 scores are reported in Zhang et al. (2014). EAG in Zhang et al. (2014) denotes the arc-eager model. (g) denotes greedy trained models.

Model             Seg      POS      Dep
Hatori+12         97.75    94.33    81.56
M. Zhang+14 STD   97.67    94.28    81.63
M. Zhang+14 EAG   97.76    94.36    81.70
Y. Zhang+15       98.04    94.47    82.01
SegTagDep(g)      98.24    94.49    80.15
SegTagDep         98.37    94.83‡   81.42‡
SegTag+Dep        98.60‡   94.76‡   82.60‡

Table 7: The SegTag+Dep model. Note that the model of Zhang et al. (2015) requires other base parsers. ‡ denotes that the improvement is statistically significant at p < 0.01 compared with SegTagDep(g) using a paired t-test.

3.2.3 Pipeline of Our Joint SegTag and Dep Model

We use our joint SegTag model as the pipeline input of the Dep model (SegTag+Dep). Both the SegTag and Dep models are trained and tested with the beam cost function with beams of size 4. Table 7 presents the results. Our SegTag+Dep model performs best in terms of dependency and word segmentation accuracy.

The SegTag+Dep model is better than the full joint model. This is because most segmentation errors of these models occur around named entities. Hatori et al. (2012)'s alignment step assumes intra-word dependencies within words, while named entities do not always have them. For example, the SegTag+Dep model treats the named entity "海赛克", a company name, as one word, while the SegTagDep model divides it into "海" (sea) and "赛克", where "赛克" could be used for a foreigner's name. For such words, SegTagDep prefers SH, because AP takes a size-2 step covering both character appending and intra-word dependency resolution, and such intra-word dependencies do not exist for named entities. This problem could be solved by adding a special transition AP_named_entity, which is similar to AP but takes a size-1 step and is used only for named entities.

Model      Dep
Dep(g)-cs  80.51
Dep(g)     80.98

Table 8: SegTag+Dep(g) model with and without character string (cs) representations. Note that we compare these models with greedy training for simplicity's sake.
Additionally, Zhang et al. (2014)’s STD (arc-standard) model works slightly better than Hatori et al. (2012)’s fully joint model in terms of the dependency score. Zhang et al. (2014)’s STD model is similar to our SegTag+Dep because they combine a word segmentator and a dependency parser using “deque” of words. 3.2.4 Effect of Character String Embeddings Finally, we compare the two pipeline models of SegTag+Dep to show the effectiveness of using character string representations instead of “UNK” embeddings. We use two dependency models with greedy training: Dep(g) for dependency model and Dep(g)-cs for dependency model without the character string embeddings . In the Dep(g)-cs model, we use the “UNK” embedding when the embeddings of the input features are unavailable, whereas we use the character string embeddings in model Dep(g). The results are presented in Table 8. When the models encounter unknown tokens, using the embeddings of character strings is better than using the “UNK” embedding. 3.2.5 Effect of Features across the Buffer and Stack We test the effect of special features: q0bX in Table 2. The q0bX features capture the tokens across the buffer and stack. Joint transition-based parsing models by Hatori et al. (2012) and Chen and Manning (2014) decide POS tags of words before corresponding word segmentations are determined. In our model, the q0bX features capture words even if their segmentations are not determined. We examine the effectiveness of these features by training greedy full joint models with and without them. The results are shown in Table 9. The q0bX features boost not only POS tagging scores but also word segmentation scores. 3.2.6 CTB-7 Experiments We also test the SegTagDep and SegTag+Dep models on CTB-7. In these experiments, we noModel Seg POS Dep SegTagDep(g) -q0bX 97.81 93.79 79.16 SegTagDep(g) 98.24 94.49 80.15 Table 9: SegTagDep model with and without (-q0bX) features across the buffer and stack. We compare these models with greedy training (g). Model Seg POS Dep Hatori+12 95.42 90.62 73.58 M. Zhang+14 STD 95.53 90.75 75.63 SegTagDep(g) 96.06 90.28 73.98 SegTagDep 95.86 90.91‡ 74.04 SegTag+Dep 96.23‡ 91.25‡ 75.28‡ Table 10: Results from SegTag+Dep and SegTagDep applied to the CTB-7 corpus. (g) denotes greedy trained models. ‡ denotes that the improvement is statistically siginificant at p < 0.01 compared with SegTagDep(g) using paired t-test. tice that the MLP with four hidden layers performs better than the MLP with three hidden layers, but we could not find definite differences in the experiments in CTB-5. We speculate that this is caused by the difference in the training set size. We present the final results with four hidden layers in Table 10. 3.2.7 Bi-LSTM Model We experiment the n-gram bi-LSTMs models with four and eight features listed in Table 3. We summarize the result in Table 11. The greedy biLSTM models perform slightly worse than the previous models, but they do not rely on feature engineering. 4 Related Work Zhang and Clark (2008) propose an incremental joint word segmentation and POS tagging model driven by a single perceptron. Zhang and Clark (2010) improve this model by using both character and word-based decoding. Hatori et al. (2011) propose a transition-based joint POS tagging and dependency parsing model. Zhang et al. (2013) propose a joint model using character structures of words for constituency parsing. Wang et al. (2013) also propose a lattice-based joint model for constituency parsing. Zhang et al. 
Zhang et al. (2015) propose a joint segmentation, POS tagging and dependency re-ranking system; this system requires base parsers. Among neural joint models, Zheng et al. (2013) propose a neural network-based Chinese word segmentation model based on tag inference, and extend their model to joint segmentation and POS tagging. Zhu et al. (2015) propose a re-ranking system for parsing results using a recursive convolutional neural network.

5 Conclusion

We propose joint parsing models based on feed-forward and bi-LSTM neural networks. Both use character string embeddings, which help to capture the similarities of incomplete tokens. We also explore a neural network with few features using n-gram bi-LSTMs. Our SegTagDep joint model achieves better Chinese word segmentation and POS tagging scores than previous joint models, and our SegTag and Dep pipeline model achieves a state-of-the-art dependency parsing score. The bi-LSTM models reduce the cost of feature engineering.

References

Chris Alberti, David Weiss, Greg Coppola, and Slav Petrov. 2015. Improved transition-based parsing and tagging with neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1354–1359. http://aclweb.org/anthology/D15-1159.

Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 2442–2452. http://www.aclweb.org/anthology/P16-1231.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473.

Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 349–359. http://aclweb.org/anthology/D15-1041.

Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 740–750. http://www.aclweb.org/anthology/D14-1082.

James Cross and Liang Huang. 2016. Incremental parsing with minimal features using bi-directional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 32–37. http://anthology.aclweb.org/P16-2006.

John Duchi, Elad Hazan, and Yoram Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. UCB/EECS-2010-24.

Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory.
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 334–343. http://www.aclweb.org/anthology/P15-1033. Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2011. Incremental joint pos tagging and dependency parsing in chinese. In Proceedings of 5th International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, pages 1216–1224. http://www.aclweb.org/anthology/I11-1136. Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2012. Incremental joint approach to word segmentation, pos tagging, and dependency parsing in chinese. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1045–1053. http://www.aclweb.org/anthology/P12-1110. Geoffrey E. Hinton, J. L. McClelland, and D. E. Rumelhart. 1986. Learning distributed representations of concepts. In Proceedings of the eighth annual conference of the cognitive science society. pages Vol.1, p.12. 1212 Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan L¨u. 2008. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of ACL-08: HLT. Association for Computational Linguistics, pages 897–904. D. P. Kingma and J. Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics 4:313–327. https://transacl.org/ojs/index.php/tacl/article/view/885. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. volume abs/1301.3781. http://arxiv.org/abs/1301.3781. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel. pages 807–814. Joakim Nivre. 2004a. Incrementality in deterministic dependency parsing. In Frank Keller, Stephen Clark, Matthew Crocker, and Mark Steedman, editors, Proceedings of the ACL Workshop Incremental Parsing: Bringing Engineering and Cognition Together. Association for Computational Linguistics, pages 50–57. Yiou Wang, Jun’ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of 5th International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, Chiang Mai, Thailand, pages 309–317. http://www.aclweb.org/anthology/I11-1035. Zhiguo Wang, Chengqing Zong, and Nianwen Xue. 2013. A lattice-based framework for joint chinese word segmentation, pos tagging and parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 623–627. http://www.aclweb.org/anthology/P13-2110. 
David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 323–333. http://www.aclweb.org/anthology/P15-1032. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2013. Chinese parsing exploiting characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 125–134. http://www.aclweb.org/anthology/P13-1013. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Character-level chinese dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1326–1336. http://www.aclweb.org/anthology/P14-1125. Yuan Zhang, Chengtao Li, Regina Barzilay, and Kareem Darwish. 2015. Randomized greedy inference for joint segmentation, pos tagging and dependency parsing. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligenc. Association for Computational Linguistics, pages 42–52. http://www.aclweb.org/anthology/N151005. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 562–571. Yue Zhang and Stephen Clark. 2010. A fast decoder for joint word segmentation and POS-tagging using a single discriminative model. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 843–852. http://www.aclweb.org/anthology/D10-1082. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 647–657. http://www.aclweb.org/anthology/D13-1061. Xiaoqing Zheng, Haoyuan Peng, Yi Chen, Pengjing Zhang, and Zhang Wenqiang. 2015. Characterbased parsing with convolutional neural network. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. page 153. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structuredprediction model for transition-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics 1213 and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1213–1222. http://www.aclweb.org/anthology/P151117. Chenxi Zhu, Xipeng Qiu, Xinchi Chen, and Xuanjing Huang. 2015. A re-ranking model for dependency parser with recursive convolutional neural network. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1159– 1168. http://www.aclweb.org/anthology/P15-1112. 1214
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1215–1226, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1112

Robust Incremental Neural Semantic Graph Parsing
Jan Buys1 and Phil Blunsom1,2
1Department of Computer Science, University of Oxford
2DeepMind
{jan.buys,phil.blunsom}@cs.ox.ac.uk

Abstract

Parsing sentences to linguistically expressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the 86.69% Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation.

1 Introduction

An important goal of Natural Language Understanding (NLU) is to parse sentences to structured, interpretable meaning representations that can be used for query execution, inference and reasoning. Recently end-to-end models have outperformed traditional pipeline approaches, predicting syntactic or semantic structure as intermediate steps, on NLU tasks such as sentiment analysis and semantic relatedness (Le and Mikolov, 2014; Kiros et al., 2015), question answering (Hermann et al., 2015) and textual entailment (Rocktäschel et al., 2015). However the linguistic structure used in applications has predominantly been shallow, restricted to bilexical dependencies or trees.

In this paper we focus on robust parsing into linguistically deep representations. The main representation that we use is Minimal Recursion Semantics (MRS) (Copestake et al., 1995, 2005), which serves as the semantic representation of the English Resource Grammar (ERG) (Flickinger, 2000). Existing parsers for full MRS (as opposed to bilexical semantic graphs derived from, but simplifying, MRS) are grammar-based, performing disambiguation with a maximum entropy model (Toutanova et al., 2005; Zhang et al., 2007); this approach has high precision but incomplete coverage.

Our main contribution is to develop a fast and robust parser for full MRS-based semantic graphs. We exploit the power of global conditioning enabled by deep learning to predict linguistically deep graphs incrementally. The model does not have access to the underlying ERG or syntactic structures from which the MRS analyses were originally derived. We develop parsers for two graph-based conversions of MRS, Elementary Dependency Structure (EDS) (Oepen and Lønning, 2006) and Dependency MRS (DMRS) (Copestake, 2009), of which the latter is inter-convertible with MRS.
Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a graph-based semantic representation that shares the goals of MRS. Aside from differences in the choice of which linguistic phenomena are annotated, MRS is a compositional representation explicitly coupled with the syntactic structure of the sentence, while AMR does not assume compositionality or alignment with the sentence structure. Recently a number of AMR parsers have been developed (Flanigan et al., 2014; Wang et al., 2015b; Artzi et al., 2015; 1215 Damonte et al., 2017), but corpora are still under active development and low inter-annotator agreement places on upper bound of 83% F1 on expected parser performance (Banarescu et al., 2013). We apply our model to AMR parsing by introducing structure that is present explicitly in MRS but not in AMR (Buys and Blunsom, 2017). Parsers based on RNNs have achieved state-ofthe-art performance for dependency parsing (Dyer et al., 2015; Kiperwasser and Goldberg, 2016) and constituency parsing (Vinyals et al., 2015b; Dyer et al., 2016; Cross and Huang, 2016b). One of the main reasons for the prevalence of bilexical dependencies and tree-based representations is that they can be parsed with efficient and wellunderstood algorithms. However, one of the key advantages of deep learning is the ability to make predictions conditioned on unbounded contexts encoded with RNNs; this enables us to predict more complex structures without increasing algorithmic complexity. In this paper we show how to perform linguistically deep parsing with RNNs. Our parser is based on a transition system for semantic graphs. However, instead of generating arcs over an ordered, fixed set of nodes (the words in the sentence), we generate the nodes and their alignments jointly with the transition actions. We use a graph-based variant of the arc-eager transition-system. The sentence is encoded with a bidirectional RNN. The transition sequence, seen as a graph linearization, can be predicted with any encoder-decoder model, but we show that using hard attention, predicting the alignments with a pointer network and conditioning explicitly on stack-based features improves performance. In order to deal with data sparsity candidate lemmas are predicted as a pre-processing step, so that the RNN decoder predicts unlexicalized node labels. We evaluate our parser on DMRS, EDS and AMR graphs. We show that our model architecture improves performance from 79.68% to 84.16% F1 over an attention-based encoderdecoder baseline. Although our parser is less accurate that a high-precision grammar-based parser on a test set of sentences parsable by that grammar, incremental prediction and GPU batch processing enables it to parse 529 tokens per second, against 7 tokens per second for the grammarbased parser. On AMR parsing our model obtains 60.11% Smatch, an improvement of 8% over an existing neural AMR parser. Figure 1: Semantic representation of the sentence “Everybody wants to meet John.” The graph is based on the Elementary Dependency Structure (EDS) representation of Minimal Recursion Semantics (MRS). The alignments are given together with the corresponding tokens, and lemmas of surface predicates and constants. 2 Meaning Representations We define a common framework for semantic graphs in which we can place both MRSbased graph representations (DMRS and EDS) and AMR. Sentence meaning is represented with rooted, labelled, connected, directed graphs (Kuhlmann and Oepen, 2016). An example graph is visualized in Figure 1. 
representations. Node labels are referred to as predicates (concepts in AMR) and edge labels as arguments (AMR relations). In addition constants, a special type of node modifiers, are used to denote the string values of named entities and numbers (including date and time expressions). Every node is aligned to a token or a continuous span of tokens in the sentence the graph corresponds to. Minimal Recursion Semantics (MRS) is a framework for computational semantics that can be used for parsing or generation (Copestake et al., 2005). Instances and eventualities are represented with logical variables. Predicates take arguments with labels from a small, fixed set of roles. Arguments are either logical variables or handles, designated formalism-internal variables. Handle equality constraints support scope underspecification; multiple scope-resolved logical representations can be derived from one MRS structure. A predicate corresponds to its intrinsic argument 1216 and is aligned to a character span of the (untokenized) input sentence. Predicates representing named entities or numbers are parameterized by strings. Quantification is expressed through predicates that bound instance variables, rather than through logical operators such as ∃or ∀. MRS was designed to be integrated with feature-based grammars such as Head-driven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1994) or Lexical Functional Grammar (LFG) (Kaplan and Bresnan, 1982). MRS has been implement the English Resource Grammar (ERG) (Flickinger, 2000), a broad-coverage high-precision HPSG grammar. Oepen and Lønning (2006) proposed Elementary Dependency Structure (EDS), a conversion of MRS to variable-free dependency graphs which drops scope underspecification. Copestake (2009) extended this conversion to avoid information loss, primarily through richer edge labels. The resulting representation, Dependency MRS (DMRS), can be converted back to the original MRS, or used directly in MRS-based applications (Copestake et al., 2016). We are interested in the empirical performance of parsers for both of these representations: while EDS is more interpretable as an independent semantic graph representation, DMRS can be related back to underspecified logical forms. A bilexical simplification of EDS has previously been used for semantic dependency parsing (Oepen et al., 2014, 2015). Figure 1 illustrates an EDS graph. MRS makes an explicit distinction between surface and abstract predicates (by convention surface predicates are prefixed by an underscore). Surface predicates consist of a lemma followed by a coarse part-of-speech tag and an optional sense label. Predicates absent from the ERG lexicon are represented by their surface forms and POS tags. We convert the character-level predicate spans given by MRS to token-level spans for parsing purposes, but the representation does not require gold tokenization. Surface predicates usually align with the span of the token(s) they represent, while abstract predicates can span longer segments. In full MRS every predicate is annotated with a set of morphosyntactic features, encoding for example tense, aspect and number information; we do not currently model these features. AMR (Banarescu et al., 2013) graphs can be represented in the same framework, despite a number of linguistic differences with MRS. 
Some information annotated explicitly in MRS is latent in AMR, including alignments and the distinction between surface (lexical) and abstract concepts. AMR predicates are based on PropBank (Palmer et al., 2005), annotated as lemmas plus sense labels, but they form only a subset of concepts. Other concepts are either English words or special keywords, corresponding to overt lexemes in some cases but not in others.

:root( <2> _v_1 :ARG1( <1> person :BV-of( <1> every_q ) ) :ARG2 <4> _v_1 :ARG1*( <1> person :ARG2( <5> named_CARG :BV-of ( <5> proper_q ) ) )

Figure 2: A top-down linearization of the EDS graph in Figure 1, using unlexicalized predicates.

3 Incremental Graph Parsing

We parse sentences to their meaning representations by incrementally predicting semantic graphs together with their alignments. Let e = e_1, e_2, ..., e_I be a tokenized English sentence, t = t_1, t_2, ..., t_J a sequential representation of its graph derivation and a = a_1, a_2, ..., a_J an alignment sequence consisting of integers in the range 1, ..., I. We model the conditional distribution p(t, a | e), which decomposes as

\prod_{j=1}^{J} p(a_j \mid (a, t)_{1:j-1}, e)\, p(t_j \mid a_{1:j}, t_{1:j-1}, e).

We also predict the end-of-span alignments as a separate sequence a^{(e)}.

3.1 Top-down linearization

We now consider how to linearize the semantic graphs, before defining the neural models that parameterize the parser in Section 4. The first approach is to linearize a graph as the pre-order traversal of its spanning tree, starting at a designated root node (see Figure 2). Variants of this approach have been proposed for neural constituency parsing (Vinyals et al., 2015b), logical form prediction (Dong and Lapata, 2016; Jia and Liang, 2016) and AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017).

In the linearization, labels of edges whose direction is reversed in the spanning tree are marked by adding -of. Edges not included in the spanning tree, referred to as reentrancies, are represented with special edges whose dependents are dummy nodes pointing back to the original nodes. Our potentially lossy representation represents these edges by repeating the dependent node labels and alignments, which are recovered heuristically. The alignment does not influence the linearized node ordering.

3.2 Transition-based parsing

Figure 1 shows that the semantic graphs we work with can also be interpreted as dependency graphs, as nodes are aligned to sentence tokens. Transition-based parsing (Nivre, 2008) has been used extensively to predict dependency graphs incrementally. We apply a variant of the arc-eager transition system that has been proposed for graph (as opposed to tree) parsing (Sagae and Tsujii, 2008; Titov et al., 2009; Gómez-Rodríguez and Nivre, 2010) to derive a transition-based parser for deep semantic graphs. In dependency parsing the sentence tokens also act as nodes in the graph, but here we need to generate the nodes incrementally as the transition system proceeds, conditioning the generation on the given sentence. Damonte et al. (2017) proposed an arc-eager AMR parser, but their transition system is more narrowly restricted to AMR graphs.

The transition system consists of a stack of graph nodes being processed and a buffer, holding a single node at a time. The main transition actions are shift, reduce, left-arc and right-arc. Figure 3 shows an example transition sequence together with the stack and buffer after each step.
The shift transition moves the element on the buffer to the top of the stack, and generates a predicate and its alignment as the next node on the buffer. Left-arc and right-arc actions add labeled arcs between the buffer and the stack top (for DMRS a transition for undirected arcs is included), but do not change the state of the stack or buffer. Finally, reduce pops the top element from the stack, and predicts its end-of-span alignment (if included in the representation). To predict non-planar arcs, we add another transition, which we call cross-arc, which first predicts the stack index of a node which is not on top of the stack, and then adds an arc between the head of the buffer and that node. Another special transition designates the buffer node as the root.

To derive an oracle for this transition system, it is necessary to determine the order in which the nodes are generated. We consider two approaches. The first ordering is obtained by performing an in-order traversal of the spanning tree, where the node order is determined by the alignment. In the resulting linearization the only non-planar arcs are reentrancies. The second approach lets the ordering be monotone (non-decreasing) with respect to the alignments, while respecting the in-order ordering for nodes with the same alignment.

In an arc-eager oracle arcs are added greedily, while a reduce action can either be performed as soon as the stack top node has been connected to all its dependents, or delayed until it has to reduce to allow the correct parse tree to be formed. In our model the oracle delays reduce, where possible, until the end alignment of the stack top node spans the node on the buffer. As the span end alignments often cover phrases that they head (e.g. for quantifiers), this gives a natural interpretation to predicting the span end together with the reduce action.

3.3 Delexicalization and lemma prediction

Each token in MRS annotations is aligned to at most one surface predicate. We decompose surface predicate prediction by predicting candidate lemmas for input tokens, and delexicalized predicates consisting only of sense labels. The full surface predicates are then recovered through the predicted alignments.

We extract a dictionary mapping words to lemmas from the ERG lexicon. Candidate lemmas are predicted using this dictionary, and where no dictionary entry is available, with a lemmatizer. The same approach is applied to predict constants, along with additional normalizations such as mapping numbers to digit strings. We use the Stanford CoreNLP toolkit (Manning et al., 2014) to tokenize and lemmatize sentences, and tag tokens with the Stanford Named Entity Recognizer (Finkel et al., 2005). The tokenization is customized to correspond closely to the ERG tokenization; hyphens are removed in a pre-processing step. For AMR we use automatic alignments and the graph topology to classify concepts as surface or abstract (Buys and Blunsom, 2017). The lexicon is restricted to PropBank (Palmer et al., 2005) predicates; for other concepts we extract a lexicon from the training data.

Action           Stack                               Buffer            Arc added
init(1, person)  [ ]                                 (1, 1, person)
sh(1, every_q)   [(1, 1, person)]                    (2, 1, every_q)
la(BV)           [(1, 1, person)]                    (2, 1, every_q)   (2, BV, 1)
sh(2, _v_1)      [(1, 1, person), (2, 1, every_q)]   (2, 1, _v_1)
re               [(1, 1, person)]                    (3, 2, _v_1)
la(ARG1)         [(1, 1, person)]                    (3, 2, _v_1)      (3, ARG1, 1)

Figure 3: Start of the transition sequence for parsing the graph in Figure 1. The transitions are shift (sh), reduce (re), left arc (la) and right arc (ra).
The action taken at each step is given, along with the state of the stack and buffer after the action is applied, and any arcs added. Shift transitions generate the alignments and predicates of the nodes placed on the buffer. Items on the stack and buffer have the form (node index, alignment, predicate label), and arcs are of the form (head index, argument label, dependent index).

4 Encoder-Decoder Models

4.1 Sentence encoder

The sentence e is encoded with a bidirectional RNN. We use a standard LSTM architecture without peephole connections (Jozefowicz et al., 2015). For every token e we embed its word, POS tag and named entity (NE) tag as vectors x_w, x_t and x_n, respectively. The embeddings are concatenated and passed through a linear transformation

g(e) = W^{(x)} [x_w; x_t; x_n] + b_x,

such that g(e) has the same dimension as the LSTM. Each input position i is represented by a hidden state h_i, which is the concatenation of its forward and backward LSTM state vectors.

4.2 Hard attention decoder

We model the alignment of graph nodes to sentence tokens, a, as a random variable. For the arc-eager model, a_j corresponds to the alignment of the node on the buffer after action t_j is executed. The distribution of t_j is over all transitions and predicates (corresponding to shift transitions), predicted with a single softmax.

The parser output is predicted by an RNN decoder. Let s_j be the decoder hidden state at output position j. We initialize s_0 with the final state of the backward encoder. The alignment is predicted with a pointer network (Vinyals et al., 2015a). The logits are computed with an MLP scoring the decoder hidden state against each of the encoder hidden states (for i = 1, ..., I),

u_j^i = w^{T} \tanh(W^{(1)} h_i + W^{(2)} s_j).

The alignment distribution is then estimated by

p(a_j = i \mid a_{1:j-1}, t_{1:j-1}, e) = \mathrm{softmax}(u_j^i).

To predict the next transition t_j, the output vector is conditioned on the encoder state vector h_{a_j} corresponding to the alignment:

o_j = W^{(3)} s_j + W^{(4)} h_{a_j}, \qquad v_j = R^{(d)} o_j + b^{(d)},

where R^{(d)} and b^{(d)} are the output representation matrix and bias vector, respectively. The transition distribution is then given by

p(t_j \mid a_{1:j}, t_{1:j-1}, e) = \mathrm{softmax}(v_j).

Let e(t) be the embedding of decoder symbol t. The RNN state at the next time-step is computed as

d_{j+1} = W^{(5)} e(t_j) + W^{(6)} h_{a_j}, \qquad s_{j+1} = \mathrm{RNN}(d_{j+1}, s_j).

The end-of-span alignment a_j^{(e)} for MRS-based graphs is predicted with another pointer network. The end alignment of a token is predicted only when a node is reduced from the stack, therefore this alignment is not observed at each time-step; it is also not fed back into the model.

The hard attention approach, based on supervised alignments, can be contrasted to soft attention, which learns to attend over the input without supervision. The attention is computed as with hard attention, as \alpha_j^i = \mathrm{softmax}(u_j^i). However, instead of making a hard selection, a weighted average over the encoder vectors is computed as

q_j = \sum_{i=1}^{I} \alpha_j^i h_i.

This vector is used instead of h_{a_j} for prediction and feeding to the next time-step.
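To make the hard-attention decoding step concrete, the following is a minimal NumPy sketch of the pointer-network scoring and the transition softmax defined above; the dimensions and parameter shapes are illustrative, not the configuration used in the experiments.

```python
# A minimal NumPy sketch of one hard-attention (pointer network) decoder step:
# score each encoder state against the decoder state, pick an alignment a_j,
# then condition the transition softmax on h_{a_j}. Names and sizes are ours.
import numpy as np

rng = np.random.default_rng(1)
d_enc, d_dec, d_att, d_out, n_trans = 8, 6, 5, 7, 10
W1 = rng.normal(0, 0.1, (d_att, d_enc))
W2 = rng.normal(0, 0.1, (d_att, d_dec))
w  = rng.normal(0, 0.1, d_att)
W3 = rng.normal(0, 0.1, (d_out, d_dec))
W4 = rng.normal(0, 0.1, (d_out, d_enc))
R_d = rng.normal(0, 0.1, (n_trans, d_out))
b_d = np.zeros(n_trans)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def decoder_step(H, s_j):
    """H: encoder states, shape (I, d_enc); s_j: decoder state, shape (d_dec,)."""
    u = np.array([w @ np.tanh(W1 @ h_i + W2 @ s_j) for h_i in H])  # u_j^i
    p_align = softmax(u)                        # p(a_j = i | ...)
    a_j = int(p_align.argmax())                 # greedy alignment choice
    o_j = W3 @ s_j + W4 @ H[a_j]                # condition on h_{a_j} (hard attention)
    p_trans = softmax(R_d @ o_j + b_d)          # p(t_j | a_{1:j}, t_{1:j-1}, e)
    return a_j, p_align, p_trans

H = rng.normal(size=(12, d_enc))                # 12 encoder positions
s = rng.normal(size=d_dec)
a, p_a, p_t = decoder_step(H, s)
print(a, int(p_t.argmax()))
```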
To implement these features the layer that computes the output vector is extended to oj = W (3)sj + W (4)haj + W (7)hst0, where st0 is the sentence alignment index of the element on top of the stack. The input layer to the next RNN time-step is similarly extended to dj+1 = W (5)e(tj) + W (6)hbuf + W (8)hst0, where buf is the buffer alignment after tj is executed. Our implementation of the stack-based model enables batch processing in static computation graphs, similar to Bowman et al. (2016). We maintain a stack of alignment indexes for each element in the batch, which is updated inside the computation graph after each parsing action. This enables minibatch SGD during training as well as efficient batch decoding. We perform greedy decoding. For the stackbased model we ensure that if the stack is empty, the next transition predicted has to be shift. For the other models we ensure that the output is wellformed during post-processing by robustly skipping over out-of-place symbols or inserting missing ones. 5 Related Work Prior work for MRS parsing predominantly predicts structures in the context of grammar-based parsing, where sentences are parsed to HPSG derivations consistent with the grammar, in this case the ERG (Flickinger, 2000). The nodes in the derivation trees are feature structures, from which MRS is extracted through unification. This approach fails to parse sentences for which no valid derivation is found. Maximum entropy models are used to score the derivations in order to find the most likely parse (Toutanova et al., 2005). This approach is implemented in the PET (Callmeier, 2000) and ACE1 parsers. There have also been some efforts to develop robust MRS parsers. One proposed approach learns a PCFG grammar to approximate the HPSG derivations (Zhang and Krieger, 2011; Zhang et al., 2014). MRS is then extracted with robust unification to compose potentially incompatible feature structures, although that still fails for a small proportion of sentences. The model is trained on a large corpus of Wikipedia text parsed with the grammar-based parser. Ytrestøl (2012) proposed a transition-based approach to HPSG parsing that produces derivations from which both syntactic and semantic (MRS) parses can be extracted. The parser has an option not to be restricted by the ERG. However, neither of these approaches have results available that can be compared directly to our setup, or generally available implementations. Although AMR parsers produce graphs that are similar in structure to MRS-based graphs, most of them make assumptions that are invalid for MRS, and rely on extensive external AMR-specific resources. Flanigan et al. (2014) proposed a twostage parser that first predicts concepts or subgraphs corresponding to sentence segments, and then parses these concepts into a graph structure. However MRS has a large proportion of abstract nodes that cannot be predicted from short segments, and interact closely with the graph structure. Wang et al. (2015b,a) proposed a custom transition-system for AMR parsing that converts dependency trees to AMR graphs, relying on assumptions on the relationship between these. Pust et al. (2015) proposed a parser based on syntaxbased machine translation (MT), while AMR has also been integrated into CCG Semantic Parsing (Artzi et al., 2015; Misra and Artzi, 2016). Recently Damonte et al. (2017) and Peng et al. (2017) proposed AMR parsers based on neural networks. 
6 Experiments

6.1 Data

DeepBank (Flickinger et al., 2012) is an HPSG and MRS annotation of the Penn Treebank Wall Street Journal (WSJ) corpus. It was developed following an approach known as dynamic treebanking (Oepen et al., 2004) that couples treebank annotation with grammar development, in this case of the ERG. This approach has been shown to lead to high inter-annotator agreement: 0.94, against 0.71 for AMR (Bender et al., 2015). Parses are only provided for sentences for which the ERG has an analysis acceptable to the annotator – this means that we cannot evaluate parsing accuracy for sentences which the ERG cannot parse (approximately 15% of the original corpus).

We use DeepBank version 1.1, corresponding to ERG 1214,2 following the suggested split of sections 0 to 19 as training data, 20 for development and 21 for testing. The gold-annotated training data consists of 35,315 sentences. We use the LOGON environment3 and the pyDelphin library4 to extract DMRS and EDS graphs.

2 http://svn.delph-in.net/erg/tags/1214/
3 http://moin.delph-in.net/LogonTop
4 https://github.com/delph-in/pydelphin

For AMR parsing we use LDC2015E86, the dataset released for the SemEval 2016 AMR parsing Shared Task (May, 2016). This data includes newswire, weblog and discussion forum text. The training set has 16,144 sentences. We obtain alignments using the rule-based JAMR aligner (Flanigan et al., 2014).

6.2 Evaluation

Dridan and Oepen (2011) proposed an evaluation metric called Elementary Dependency Matching (EDM) for MRS-based graphs. EDM computes the F1-score of tuples of predicates and arguments. A predicate tuple consists of the label and character span of a predicate, while an argument tuple consists of the character spans of the head and dependent nodes of the relation, together with the argument label. In order to tolerate subtle tokenization differences with respect to punctuation, we allow span pairs whose ends differ by one character to be matched.

The Smatch metric (Cai and Knight, 2013), proposed for evaluating AMR graphs, also measures graph overlap, but does not rely on sentence alignments to determine the correspondences between graph nodes. Smatch is instead computed by performing inference over graph alignments to estimate the maximum F1-score obtainable from a one-to-one matching between the predicted and gold graph nodes.

Model      EDM     EDMP    EDMA
TD lex     81.44   85.20   76.87
TD unlex   81.72   85.59   77.04
AE lex     81.35   85.79   76.02
AE unlex   82.56   86.76   77.54

Table 1: DMRS development set results for attention-based encoder-decoder models with alignments encoded in the linearization, for top-down (TD) and arc-eager (AE) linearizations, and lexicalized and unlexicalized predicate prediction.

6.3 Model setup

Our parser5 is implemented in TensorFlow (Abadi et al., 2015). For training we use Adam (Kingma and Ba, 2015) with learning rate 0.01 and batch-size 64. Gradient norms are clipped to 5.0 (Pascanu et al., 2013). We use single-layer LSTMs with dropout of 0.3 (tuned on the development set) on input and output connections. We use encoder and decoder embeddings of size 256, and POS and NE tag embeddings of size 32. For DMRS and EDS graphs the hidden unit size is set to 256, for AMR it is 128. This configuration, found using grid search and heuristic search within the range of models that fit into a single GPU, gave the best performance on the development set under multiple graph linearizations.
Encoder word embeddings are initialized (in the first 100 dimensions) with pre-trained order-sensitive embeddings (Ling et al., 2015). Singletons in the encoder input are replaced with an unknown word symbol with probability 0.5 for each iteration.

5 Code and data preparation scripts are available at https://github.com/janmbuys/DeepDeepParser.

6.4 MRS parsing results

We compare different linearizations and model architectures for parsing DMRS on the development data, showing that our approach is more accurate than baseline neural approaches. We report EDM scores, including scores for predicate (EDMP) and argument (EDMA) prediction.

First we report results using standard attention-based encoder-decoders, with the alignments encoded as token strings in the linearization (Table 1). We compare the top-down (TD) and arc-eager (AE) linearizations, as well as the effect of delexicalizing the predicates (factorizing lemmas out of the linearization and predicting them separately). In both cases constants are predicted with a dictionary lookup based on the predicted spans. A special label is predicted for predicates not in the ERG lexicon – the words and POS tags that make up those predicates are recovered through the alignments during post-processing.

The arc-eager unlexicalized representation gives the best performance, even though the model has to learn to model the transition system stack through the recurrent hidden states without any supervision of the transition semantics. The unlexicalized models are more accurate, mostly due to their ability to generalize to sparse or unseen predicates occurring in the lexicon. For the arc-eager representation, the oracle EDM is 99% for the lexicalized representation and 98.06% for the delexicalized representation. The remaining errors are mostly due to discrepancies between the tokenization used by our system and the ERG tokenization. The unlexicalized models are also faster to train, as the decoder's output vocabulary is much smaller, reducing the expense of computing softmaxes over large vocabularies.

Model     EDM     EDMP    EDMA
TD soft   81.53   85.32   76.94
TD hard   82.75   86.37   78.37
AE hard   84.65   87.77   80.85
AE stack  85.28   88.38   81.51

Table 2: DMRS development set results of encoder-decoder models with pointer-based alignment prediction, delexicalized predicates and hard or soft attention.

Next we consider models with delexicalized linearizations that predict the alignments with pointer networks, contrasting soft and hard attention models (Table 2). The results show that the arc-eager models perform better than those based on the top-down representation. For the arc-eager model we use hard attention, due to the natural interpretation of the alignment prediction corresponding to the transition system. The stack-based architecture gives further improvements. When comparing the effect of different predicate orderings for the arc-eager model, we find that the monotone ordering performs 0.44 EDM better than the in-order ordering, despite having to parse more non-planar dependencies.

Model        TD RNN   AE RNN   ACE
EDM          79.68    84.16    89.64
EDMP         83.36    87.54    92.08
EDMA         75.16    80.10    86.77
Start EDM    84.44    87.81    91.91
Start EDMA   80.93    85.61    89.28
Smatch       85.28    86.69    93.50

Table 3: DMRS parsing test set results, comparing the standard top-down attention-based and arc-eager stack-based RNN models to the grammar-based ACE parser.

We also trained models that only predict predicates (in monotone order) together with their start spans.
The hard attention model obtains 91.36% F1 on predicates together with their start spans with the unlexicalized model, compared to 88.22% for lexicalized predicates and 91.65% for the full parsing model.

Table 3 reports test set results for various evaluation metrics. Start EDM is calculated by requiring only the start of the alignment spans to match, not the ends. We compare the performance of our baseline and stack-based models against ACE, the ERG-based parser. Despite the promising performance of the model, a gap remains between the accuracy of our parser and ACE. One reason for this is that the test set sentences will arguably be easier for ACE to parse, as their choice was restricted by the same grammar that ACE uses. EDM metrics excluding end-span prediction (Start EDM) show that our parser has relatively more difficulty in parsing end-span predictions than the grammar-based parser.

We also evaluate the speed of our model compared with ACE. For the unbatched version of our model, the stack-based parser parses 41.63 tokens per second, while the batched implementation parses 529.42 tokens per second using a batch size of 128. In comparison, the setting of ACE for which we report accuracies parses 7.47 tokens per second. By restricting the memory usage of ACE, which restricts its coverage, we see that ACE can parse 11.07 tokens per second at 87.7% coverage, and 15.11 tokens per second at 77.8% coverage.

Model    AE RNN   ACE
EDM      85.48    89.58
EDMP     88.14    91.82
EDMA     82.20    86.92
Smatch   86.50    93.52

Table 4: EDS parsing test set results.

Finally we report results for parsing EDS (Table 4). The EDS parsing task is slightly simpler than DMRS, due to the absence of rich argument labels and additional graph edges that allow the recovery of full MRS. We see that for ACE the accuracies are very similar, while for our model EDS parsing is more accurate on the EDM metrics. We hypothesize that most of the extra information in DMRS can be obtained through the ERG, to which ACE has access but our model doesn't. An EDS corpus which consists of about 95% of the DeepBank data has also been released,6 with the goal of enabling comparison with other semantic graph parsing formalisms, including CCG dependencies and Prague Semantic Dependencies, on the same data set (Kuhlmann and Oepen, 2016). On this corpus our model obtains 85.87 EDM and 85.49 Smatch.

6.5 AMR parsing

We apply the same approach to AMR parsing. Results on the development set are given in Table 5. The arc-eager-based models again give better performance, mainly due to improved concept prediction accuracy. However, concept prediction remains the most important weakness of the model; Damonte et al. (2017) report that state-of-the-art AMR parsers score 83% on concept prediction.

Model            Concept F1   Smatch
TD no pointers   70.16        57.95
TD soft          71.25        59.39
TD soft unlex    72.62        59.88
AE hard unlex    76.83        59.83
AE stack unlex   77.93        61.21

Table 5: Development set results for AMR parsing. All the models except the first predict alignments with pointer networks.

We report test set results in Table 6. Our best neural model outperforms the baseline JAMR parser (Flanigan et al., 2014), but still lags behind the performance of state-of-the-art AMR parsers such as CAMR (Wang et al., 2016) and AMR Eager (Damonte et al., 2017). These models make extensive use of external resources, including syntactic parsers and semantic role labellers.
Our attention-based encoder-decoder model already outperforms previous sequence-to-sequence 6http://sdp.delph-in.net/osdp-12.tgz Model Smatch Flanigan et al. (2014) 56 Wang et al. (2016) 66.54 Damonte et al. (2017) 64 Peng and Gildea (2016) 55 Peng et al. (2017) 52 Barzdins and Gosko (2016) 43.3 TD no pointers 56.56 AE stack delex 60.11 Table 6: AMR parsing test set results (Smatch F1 scores). Published results follow the number of decimals which were reported. AMR parsers (Barzdins and Gosko, 2016; Peng et al., 2017), and the arc-eager model boosts accuracy further. Our model also outperforms a Synchronous Hyperedge Replacement Grammar model (Peng and Gildea, 2016) which is comparable as it does not make extensive use of external resources. 7 Conclusion In this paper we advance the state of parsing by employing deep learning techniques to parse sentence to linguistically expressive semantic representations that have not previously been parsed in an end-to-end fashion. We presented a robust, wide-coverage parser for MRS that is faster than existing parsers and amenable to batch processing. We believe that there are many future avenues to explore to further increase the accuracy of such parsers, including different training objectives, more structured architectures and semisupervised learning. Acknowledgments The first author thanks the financial support of the Clarendon Fund and the Skye Foundation. We thank Stephan Oepen for feedback and help with data preperation, and members of the Oxford NLP group for valuable discussions. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, 1223 Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. http://tensorflow.org/. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1699–1710. http://aclweb.org/anthology/D15-1198. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Association for Computational Linguistics, Sofia, Bulgaria, pages 178–186. http://www.aclweb.org/anthology/W13-2322. Guntis Barzdins and Didzis Gosko. 2016. Riga at semeval-2016 task 8: Impact of smatch extensions and character-level neural translation on AMR parsing accuracy. In Proceedings of SemEval. Emily M Bender, Dan Flickinger, Stephan Oepen, Woodley Packard, and Ann Copestake. 2015. Layers of interpretation: On grammar and compositionality. In Proceedings of the 11th International Conference on Computational Semantics. pages 239– 249. Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. 
Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of ACL. pages 1466–1477. http://www.aclweb.org/anthology/P16-1139. Jan Buys and Phil Blunsom. 2017. Oxford at SemEval2017 Task 9: Neural AMR parsing with pointeraugmented attention. In Proceedings of SemEval. Shu Cai and Kevin Knight. 2013. Smatch: An evaluation metric for semantic feature structures. In Proceedings of ACL (short papers). Ulrich Callmeier. 2000. PET - a platform for experimentation with efficient HPSG processing techniques. Natural Language Engineering 6(1):99– 107. Ann Copestake. 2009. Invited talk: Slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go. In Proceedings of EACL. pages 1–9. http://www.aclweb.org/anthology/E09-1001. Ann Copestake, Guy Emerson, Michael Wayne Goodman, Matic Horvat, Alexander Kuhnle, and Ewa Muszyska. 2016. Resources for building applications with dependency minimal recursion semantics. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). Ann Copestake, Dan Flickinger, Rob Malouf, Susanne Riehemann, and Ivan Sag. 1995. Translation using minimal recursion semantics. In In Proceedings of the Sixth International Conference on Theoretical and Methodological Issues in Machine Translation. Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation 3(2-3):281–332. James Cross and Liang Huang. 2016a. Incremental parsing with minimal features using bi-directional lstm. In Proceedings of ACL. page 32. James Cross and Liang Huang. 2016b. Spanbased constituency parsing with a structurelabel system and provably optimal dynamic oracles. In Proceedings of EMNLP. pages 1–11. https://aclweb.org/anthology/D16-1001. Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of EACL. pages 536– 546. http://www.aclweb.org/anthology/E17-1051. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of ACL. pages 33–43. http://www.aclweb.org/anthology/P16-1004. Rebecca Dridan and Stephan Oepen. 2011. Parser evaluation using elementary dependency matching. In Proceedings of the 12th International Conference on Parsing Technologies. Association for Computational Linguistics, pages 225–230. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of ACL. pages 334– 343. http://www.aclweb.org/anthology/P15-1033. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of NAACL. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of ACL. pages 363–370. http://dx.doi.org/10.3115/1219840.1219885. Jeffrey Flanigan, Sam Thomson, Jaime G. Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of ACL. pages 1426– 1436. http://aclweb.org/anthology/P/P14/P141134.pdf. 1224 Dan Flickinger. 2000. On building a more effcient grammar by exploiting types. Natural Language Engineering 6(01):15–28. Dan Flickinger, Yi Zhang, and Valia Kordoni. 2012. Deepbank. 
a dynamically annotated treebank of the wall street journal. In Proceedings of the 11th International Workshop on Treebanks and Linguistic Theories. pages 85–96. Carlos G´omez-Rodr´ıguez and Joakim Nivre. 2010. A transition-based parser for 2-planar dependency structures. In Proceedings of ACL. pages 1492– 1501. http://www.aclweb.org/anthology/P10-1151. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of ACL. pages 12–22. http://www.aclweb.org/anthology/P16-1002. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of ICML. pages 2342–2350. Ronald M Kaplan and Joan Bresnan. 1982. Lexicalfunctional grammar: A formal system for grammatical representation. Formal Issues in LexicalFunctional Grammar pages 29–130. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. http://arxiv.org/abs/1412.6980. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics 4:313–327. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems. pages 3294–3302. Marco Kuhlmann and Stephan Oepen. 2016. Towards a catalogue of linguistic graph banks. Computational Linguistics 42(4):819–827. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In ICML. volume 14, pages 1188–1196. Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of NAACL-HLT. pages 1299–1304. http://www.aclweb.org/anthology/N15-1142. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL System Demonstrations. pages 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010. Jonathan May. 2016. Semeval-2016 task 8: Meaning representation parsing. In Proceedings of SemEval. pages 1063–1073. http://www.aclweb.org/anthology/S16-1166. Dipendra Kumar Misra and Yoav Artzi. 2016. Neural shift-reduce ccg semantic parsing. In Proceedings of EMNLP. Austin, Texas, pages 1775–1786. https://aclweb.org/anthology/D16-1183. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics 34(4):513–553. Stephan Oepen, Dan Flickinger, Kristina Toutanova, and Christopher D. Manning. 2004. Lingo redwoods. Research on Language and Computation 2(4):575–596. https://doi.org/10.1007/s11168-0047430-4. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of SemEval. pages 915–926. http://www.aclweb.org/anthology/S15-2153. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Yi Zhang. 2014. Semeval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of SemEval. pages 63–72. 
http://www.aclweb.org/anthology/S14-2008. Stephan Oepen and Jan Tore Lønning. 2006. Discriminant-based MRS banking. In Proceedings of the 5th International Conference on Language Resources and Evaluation. pages 1250–1255. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics 31(1):71– 106. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. ICML (3) 28:1310–1318. Xiaochang Peng and Daniel Gildea. 2016. Uofr at semeval-2016 task 8: Learning synchronous hyperedge replacement grammar for amr parsing. In Proceedings of SemEval-2016. pages 1185–1189. http://www.aclweb.org/anthology/S16-1183. Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the data sparsity issue in neural amr parsing. In Proceedings of EACL. Preprint. http://www.cs.brandeis.edu/ cwang24/files/eacl17.pdf. 1225 Carl Pollard and Ivan A Sag. 1994. Head-driven phrase structure grammar. University of Chicago Press. Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing English into abstract meaning representation using syntax-based machine translation. In Proceedings of EMNLP. Association for Computational Linguistics, Lisbon, Portugal, pages 1143–1154. http://aclweb.org/anthology/D15-1136. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664 . Kenji Sagae and Jun’ichi Tsujii. 2008. Shiftreduce dependency DAG parsing. In Proceedings of Coling 2008. pages 753–760. http://www.aclweb.org/anthology/C08-1095. Ivan Titov, James Henderson, Paola Merlo, and Gabriele Musillo. 2009. Online graph planarisation for synchronous parsing of semantic and syntactic dependencies. In IJCAI. pages 1562–1567. Kristina Toutanova, Christopher D. Manning, Dan Flickinger, and Stephan Oepen. 2005. Stochastic HPSG parse disambiguation using the redwoods corpus. Research on Language and Computation 3(1):83–105. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015a. Pointer networks. In Advances in Neural Information Processing Systems 28. pages 2692– 2700. http://papers.nips.cc/paper/5866-pointernetworks.pdf. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015b. Grammar as a foreign language. In Advances in Neural Information Processing Systems. pages 2755–2763. Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. Camr at semeval2016 task 8: An extended transition-based amr parser. In Proceedings of SemEval. pages 1173– 1178. http://www.aclweb.org/anthology/S16-1181. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based AMR parsing with refined actions and auxiliary analyzers. In Proceedings of ACL (2). pages 857–862. http://www.aclweb.org/anthology/P15-2141.pdf. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for AMR parsing. In Proceedings of NAACL 2015. pages 366–375. http://aclweb.org/anthology/N/N15/N151040.pdf. Gisle Ytrestøl. 2012. Transition-Based Parsing for Large-Scale Head-Driven Phrase Structure Grammars. Ph.D. thesis, University of Oslo. Yi Zhang and Hans-Ulrich Krieger. 2011. Large-scale corpus-driven PCFG approximation of an HPSG. In Proceedings of the 12th international conference on parsing technologies. Association for Computational Linguistics, pages 198–208. 
Yi Zhang, Stephan Oepen, and John Carroll. 2007. Efficiency in unification-based n-best parsing. In Proceedings of IWPT. pages 48–59. http://www.aclweb.org/anthology/W/W07/W072207. Yi Zhang, Stephan Oepen, Rebecca Dridan, Dan Flickinger, and Hans-Ulrich Krieger. 2014. Robust parsing, meaning composition, and evaluation: Integrating grammar approximation, default unification, and elementary semantic dependencies. Unpublished manuscript.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1227–1236 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1113 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1227–1236 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1113 Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao,Peng Zhou, Bo Xu Institute of Automation, Chinese Academy of Sciences, 100190, Beijing, P.R. China {suncong.zheng, feng.wang,hongyun.bao, haoyuexing2014, peng.zhou,xubo}@ia.ac.cn Abstract Joint extraction of entities and relations is an important task in information extraction. To tackle this problem, we firstly propose a novel tagging scheme that can convert the joint extraction task to a tagging problem. Then, based on our tagging scheme, we study different end-toend models to extract entities and their relations directly, without identifying entities and relations separately. We conduct experiments on a public dataset produced by distant supervision method and the experimental results show that the tagging based methods are better than most of the existing pipelined and joint learning methods. What’s more, the end-to-end model proposed in this paper, achieves the best results on the public dataset. 1 Introduction Joint extraction of entities and relations is to detect entity mentions and recognize their semantic relations simultaneously from unstructured text, as Figure 1 shows. Different from open information extraction (Open IE) (Banko et al., 2007) whose relation words are extracted from the given sentence, in this task, relation words are extracted from a predefined relation set which may not appear in the given sentence. It is an important issue in knowledge extraction and automatic construction of knowledge base. Traditional methods handle this task in a pipelined manner, i.e., extracting the entities (Nadeau and Sekine, 2007) first and then recognizing their relations (Rink, 2010). This separated framework makes the task easy to deal with, and each component can be more flexible. But it neglects the relevance between these two sub-tasks The [United States]E-loc President [Trump]E-per will visit the [Apple Inc]E-Org . Country-President None None Extracted Results {United States, Country-President, Trump} Figure 1: A standard example sentence for the task. “Country-President” is a relation in the predefined relation set. and each subtask is an independent model. The results of entity recognition may affect the performance of relation classification and lead to erroneous delivery (Li and Ji, 2014). Different from the pipelined methods, joint learning framework is to extract entities together with relations using a single model. It can effectively integrate the information of entities and relations, and it has been shown to achieve better results in this task. However, most existing joint methods are feature-based structured systems (Li and Ji, 2014; Miwa and Sasaki, 2014; Yu and Lam, 2010; Ren et al., 2017). They need complicated feature engineering and heavily rely on the other NLP toolkits, which might also lead to error propagation. 
In order to reduce the manual work in feature extraction, recently, (Miwa and Bansal, 2016) presents a neural networkbased method for the end-to-end entities and relations extraction. Although the joint models can represent both entities and relations with shared parameters in a single model, they also extract the entities and relations separately and produce redundant information. For instance, the sentence in Figure 1 contains three entities: “United States”, “Trump” and “Apple Inc”. But only “United States” and “Trump” hold a fix relation “CountryPresident”. Entity “Apple Inc” has no obvious relationship with the other entities in this sen1227 tence. Hence, the extracted result from this sentence is {United Statese1, Country-Presidentr, Trumpe2}, which called triplet here. In this paper, we focus on the extraction of triplets that are composed of two entities and one relation between these two entities. Therefore, we can model the triplets directly, rather than extracting the entities and relations separately. Based on the motivations, we propose a tagging scheme accompanied with the end-to-end model to settle this problem. We design a kind of novel tags which contain the information of entities and the relationships they hold. Based on this tagging scheme, the joint extraction of entities and relations can be transformed into a tagging problem. In this way, we can also easily use neural networks to model the task without complicated feature engineering. Recently, end-to-end models based on LSTM (Hochreiter and Schmidhuber, 1997) have been successfully applied to various tagging tasks: Named Entity Recognition (Lample et al., 2016), CCG Supertagging (Vaswani et al., 2016), Chunking (Zhai et al., 2017) et al. LSTM is capable of learning long-term dependencies, which is beneficial to sequence modeling tasks. Therefore, based on our tagging scheme, we investigate different kinds of LSTM-based end-to-end models to jointly extract the entities and relations. We also modify the decoding method by adding a biased loss to make it more suitable for our special tags. The method we proposed is a supervised learning algorithm. In reality, however, the process of manually labeling a training set with a large number of entity and relation is too expensive and error-prone. Therefore, we conduct experiments on a public dataset1 which is produced by distant supervision method (Ren et al., 2017) to validate our approach. The experimental results show that our tagging scheme is effective in this task. In addition, our end-to-end model can achieve the best results on the public dataset. The major contributions of this paper are: (1) A novel tagging scheme is proposed to jointly extract entities and relations, which can easily transform the extraction problem into a tagging task. (2) Based on our tagging scheme, we study different kinds of end-to-end models to settle the problem. The tagging-based methods are better than most of the existing pipelined and joint learning methods. (3) Furthermore, we also develop an end-to1https://github.com/shanzhenren/CoType end model with biased loss function to suit for the novel tags. It can enhance the association between related entities. 2 Related Works Entities and relations extraction is an important step to construct a knowledge base, which can be benefit for many NLP tasks. Two main frameworks have been widely used to solve the problem of extracting entity and their relationships. One is the pipelined method and the other is the joint learning method. 
The pipelined method treats this task as two separated tasks, i.e., named entity recognition (NER) (Nadeau and Sekine, 2007) and relation classification (RC) (Rink, 2010). Classical NER models are linear statistical models, such as Hidden Markov Models (HMM) and Conditional Random Fields (CRF) (Passos et al., 2014; Luo et al., 2015). Recently, several neural network architectures (Chiu and Nichols, 2015; Huang et al., 2015; Lample et al., 2016) have been successfully applied to NER, which is regarded as a sequential token tagging task. Existing methods for relation classification can also be divided into handcrafted feature based methods (Rink, 2010; Kambhatla, 2004) and neural network based methods (Xu, 2015a; Zheng et al., 2016; Zeng, 2014; Xu, 2015b; dos Santos, 2015). While joint models extract entities and relations using a single model. Most of the joint methods are feature-based structured systems (Ren et al., 2017; Yang and Cardie, 2013; Singh et al., 2013; Miwa and Sasaki, 2014; Li and Ji, 2014). Recently, (Miwa and Bansal, 2016) uses a LSTM-based model to extract entities and relations, which can reduce the manual work. Different from the above methods, the method proposed in this paper is based on a special tagging manner, so that we can easily use end-toend model to extract results without NER and RC. end-to-end method is to map the input sentence into meaningful vectors and then back to produce a sequence. It is widely used in machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014) and sequence tagging tasks (Lample et al., 2016; Vaswani et al., 2016). Most methods apply bidirectional LSTM to encode the input sentences, but the decoding methods are always different. For examples, (Lample et al., 2016) use a CRF layers to decode the tag sequence, while 1228 Input Sentence: The United States President Trump will visit the Apple Inc founded by Steven Paul Jobs {Apple Inc, Company-Founder, Steven Paul Jobs} Final Results: Tags: O B-CP-1 E-CP-1 O S-CP-2 O O O B-CF-1 E-CF-1 O O B-CF-2 I-CF-2 E-CF-2 {United States, Country-President, Trump} Figure 2: Gold standard annotation for an example sentence based on our tagging scheme, where “CP” is short for “Country-President” and “CF” is short for “Company-Founder”. (Vaswani et al., 2016; Katiyar and Cardie, 2016) apply LSTM layer to produce the tag sequence. 3 Method We propose a novel tagging scheme and an end-toend model with biased objective function to jointly extract entities and their relations. In this section, we firstly introduce how to change the extraction problem to a tagging problem based on our tagging method. Then we detail the model we used to extract results. 3.1 The Tagging Scheme Figure 2 is an example of how the results are tagged. Each word is assigned a label that contributes to extract the results. Tag “O” represents the “Other” tag, which means that the corresponding word is independent of the extracted results. In addition to “O”, the other tags consist of three parts: the word position in the entity, the relation type, and the relation role. We use the “BIES” (Begin, Inside, End,Single) signs to represent the position information of a word in the entity. The relation type information is obtained from a predefined set of relations and the relation role information is represented by the numbers “1” and “2”. An extracted result is represented by a triplet: (Entity1, RelationType, Entity2). 
“1” means that the word belongs to the first entity in the triplet, while “2” belongs to second entity that behind the relation type. Thus, the total number of tags is Nt = 2 ∗4 ∗|R| + 1, where |R| is the size of the predefined relation set. Figure 2 is an example illustrating our tagging method. The input sentence contains two triplets: {United States, Country-President, Trump} and {Apple Inc, Company-Founder, Steven Paul Jobs}, where “Country-President” and “Company-Founder” are the predefined relation types. The words “United”,“States”,“ Trump”,“Apple”,“Inc” ,“Steven”, “Paul” and “Jobs” are all related to the final extracted results. Thus they are tagged based on our special tags. For example, the word of “United” is the first word of entity “United States” and is related to the relation “Country-President”, so its tag is “B-CP-1”. The other entity “ Trump”, which is corresponding to “United States”, is labeled as “S-CP-2”. Besides, the other words irrelevant to the final result are labeled as “O”. 3.2 From Tag Sequence To Extracted Results From the tag sequence in Figure 2, we know that “ Trump” and “United States” share the same relation type “Country-President”, “Apple Inc” and “Steven Paul Jobs” share the same relation type “Company-Founder”. We combine entities with the same relation type into a triplet to get the final result. Accordingly, “ Trump” and “United States” can be combined into a triplet whose relation type is “Country-President”. Because, the relation role of “ Trump” is “2” and “United States” is “1”, the final result is {United States, CountryPresident, Trump}. The same applies to {Apple Inc, Company-Founder, Steven Paul Jobs}. Besides, if a sentence contains two or more triplets with the same relation type, we combine every two entities into a triplet based on the nearest principle. For example, if the relation type “Country-President” in Figure 2 is “CompanyFounder”, then there will be four entities in the given sentence with the same relation type. “United States” is closest to entity “ Trump” and the “Apple Inc” is closest to “Jobs”, so the results will be {United States, Company-Founder, Trump} and {Apple Inc, Company-Founder, Steven Paul Jobs}. In this paper, we only consider the situation where an entity belongs to a triplet, and we leave identification of overlapping relations for future work. 1229 The United States president O B-CP-1 E-CP-1 O S-CP-2 W1 Bi-LSTM h1 LSTMd T1 W2 Bi-LSTM h2 LSTMd T2 W3 Bi-LSTM h3 LSTMd T3 W4 Bi-LSTM h4 LSTMd T4 W5 Bi-LSTM h5 LSTMd T5 Trump tanh σ σ σ X + X tanh X Wt ht-1 ht ct-1 ct tanh σ σ σ X + X tanh X Wt ht-1 ht ct-1 ct Tt Tt-1 tanh (a) Bi-LSTM Block (b) LSTMd Block Input Sentence Embeding Layer Encoding Layer Decoding Layer Softmax Output O B-CP-1 E-CP-1 O S-CP-2 W1 Bi-LSTM h1 LSTMd T1 W2 Bi-LSTM h2 LSTMd T2 W3 Bi-LSTM h3 LSTMd T3 W4 Bi-LSTM h4 LSTMd T4 W5 Bi-LSTM h5 LSTMd T5 Input Sentence Embedding Layer Encoding Layer Decoding Layer Softmax Output The United States president Trump ... (a) The End-to-End Model tanh σ σ σ X + X tanh X ht ct-1 ct (b) Bi-LSTM Block Wt ht-1 tanh σ σ σ X + X tanh X ht ht-1 ht ct-1 ct Tt Tt-1 tanh (c) LSTMd Block 2 2 2 2 Figure 3: An illustration of our model. (a): The architecture of the end-to-end model, (b): The LSTM memory block in Bi-LSTM encoding layer, (c): The LSTM memory block in LSTMd decoding layer. 3.3 The End-to-end Model In recent years, end-to-end model based on neural network is been widely used in sequence tagging task. 
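Before turning to the network itself, the tag-to-triplet conversion of Sections 3.1 and 3.2 can be made concrete with a short sketch (an editorial illustration, not the authors' code). It assumes tags of the form "B/I/E/S-RelationType-Role" with hyphen-free relation abbreviations, and pairs each role-"1" mention with the nearest role-"2" mention of the same relation type.

def decode_triplets(words, tags):
    # 1. Collect entity mentions together with their relation type, role and position.
    mentions, buf = [], []
    for i, (w, t) in enumerate(zip(words, tags)):
        if t == 'O':
            buf = []
            continue
        pos, rel, role = t.split('-')
        if pos in ('B', 'S'):
            buf = []
        buf.append(w)
        if pos in ('E', 'S'):
            mentions.append((' '.join(buf), rel, role, i))
            buf = []
    # 2. Pair every role-'1' mention with the nearest unused role-'2' mention
    #    sharing the same relation type (the "nearest principle").
    triplets, used = [], set()
    for m1 in [m for m in mentions if m[2] == '1']:
        best, best_dist = None, None
        for j, m2 in enumerate(mentions):
            if j in used or m2[2] != '2' or m2[1] != m1[1]:
                continue
            dist = abs(m2[3] - m1[3])
            if best is None or dist < best_dist:
                best, best_dist = j, dist
        if best is not None:
            used.add(best)
            triplets.append((m1[0], m1[1], mentions[best][0]))
    return triplets

# the sentence of Figure 1, with "CP" short for "Country-President"
words = 'The United States President Trump will visit the Apple Inc'.split()
tags = ['O', 'B-CP-1', 'E-CP-1', 'O', 'S-CP-2', 'O', 'O', 'O', 'O', 'O']
print(decode_triplets(words, tags))   # [('United States', 'CP', 'Trump')]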
In this paper, we investigate an end-to-end model to produce the tags sequence as Figure 3 shows. It contains a bi-directional Long Short Term Memory (Bi-LSTM) layer to encode the input sentence and a LSTM-based decoding layer with biased loss. The biased loss can enhance the relevance of entity tags. The Bi-LSTM Encoding Layer. In sequence tagging problems, the Bi-LSTM encoding layer has been shown the effectiveness to capture the semantic information of each word. It contains forward lstm layer, backward lstm layer and the concatenate layer. The word embedding layer converts the word with 1-hot representation to an embedding vector. Hence, a sequence of words can be represented as W = {w1, ...wt, wt+1...wn}, where wt ∈Rd is the d-dimensional word vector corresponding to the t-th word in the sentence and n is the length of the given sentence. After word embedding layer, there are two parallel LSTM layers: forward LSTM layer and backward LSTM layer. The LSTM architecture consists of a set of recurrently connected subnets, known as memory blocks. Each time-step is a LSTM memory block. The LSTM memory block in Bi-LSTM encoding layer is used to compute current hidden vector ht based on the previous hidden vector ht−1, the previous cell vector ct−1 and the current input word embedding wt. Its structure diagram is shown in Figure 3 (b), and detail operations are defined as follows: it = δ(Wwiwt + Whiht−1 + Wcict−1 + bi), (1) ft = δ(Wwfwt+Whfht−1+Wcfct−1+bf), (2) zt = tanh(Wwcwt + Whcht−1 + bc), (3) ct = ftct−1 + itzt, (4) ot = δ(Wwowt + Whoht−1 + Wcoct + bo), (5) ht = ottanh(ct), (6) where i, f and o are the input gate, forget gate and output gate respectively, b is the bias term, c is the cell memory, and W(.) are the parameters. For each word wt, the forward LSTM layer will encode wt by considering the contextual information from word w1 to wt, which is marked as −→ ht. In the similar way, the backward LSTM layer will encode wt based on the contextual information from wn to wt, which is marked as ←− ht. Finally, we concatenate ←− ht and −→ ht to represent word t’s encoding information, denoted as ht = [−→ ht, ←− ht]. The LSTM Decoding Layer. We also adopt a LSTM structure to produce the tag sequence. When detecting the tag of word wt, the inputs of decoding layer are: ht obtained from Bi-LSTM encoding layer, former predicted tag embedding Tt−1, former cell value c(2) t−1, and the former hidden vector in decoding layer h(2) t−1. The structure diagram of the memory block in LSTMd is shown in Figure 3 (c), and detail operations are defined as follows: i(2) t = δ(W (2) wi ht + W (2) hi h(2) t−1 + WtiTt−1 + b(2) i ), (7) 1230 f(2) t = δ(W (2) wf ht + W (2) hf h(2) t−1 + WtfTt−1 + b(2) f ), (8) z(2) t = tanh(W (2) wc ht+W (2) hc h(2) t−1+WtcTt−1+b(2) c ), (9) c(2) t = f(2) t c(2) t−1 + i(2) t z(2) t , (10) o(2) t = δ(W (2) wo ht + W (2) ho h(2) t−1 + W (2) co ct + b(2) o ), (11) h(2) t = o(2) t tanh(c(2) t ), (12) Tt = Wtsh(2) t + bts. (13) The final softmax layer computes normalized entity tag probabilities based on the tag predicted vector Tt: yt = WyTt + by, (14) pi t = exp(yi t) Nt P j=1 exp(yj t ) , (15) where Wy is the softmax matrix, Nt is the total number of tags. Because T is similar to tag embedding and LSTM is capable of learning longterm dependencies, the decoding manner can model tag interactions. The Bias Objective Function. We train our model to maximize the log-likelihood of the data and the optimization method we used is RMSprop proposed by Hinton in (Tieleman and Hinton, 2012). 
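To make the computation in Eqs. (1)–(15) easier to follow before the objective is defined, the sketch below gives a much-simplified numpy version of the encoder-decoder forward pass (an editorial illustration, not the authors' implementation): the peephole terms and several bias vectors of the equations are dropped, parameter shapes and initialisation are arbitrary, and greedy argmax decoding stands in for training-time behaviour.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # One simplified LSTM step; W, U, b stack the input/forget/cell/output
    # parameters with shapes (4d, in_dim), (4d, d) and (4d,).
    d = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i, f = sigmoid(z[:d]), sigmoid(z[d:2*d])
    g, o = np.tanh(z[2*d:3*d]), sigmoid(z[3*d:])
    c = f * c_prev + i * g
    return o * np.tanh(c), c

def encode_decode(word_vecs, P):
    d = P['d']
    # Bi-LSTM encoding layer: concatenate forward and backward hidden states.
    hf, cf, hb, cb = np.zeros(d), np.zeros(d), np.zeros(d), np.zeros(d)
    fwd, bwd = [], []
    for x in word_vecs:
        hf, cf = lstm_step(x, hf, cf, *P['fwd'])
        fwd.append(hf)
    for x in reversed(word_vecs):
        hb, cb = lstm_step(x, hb, cb, *P['bwd'])
        bwd.append(hb)
    H = [np.concatenate([f, b]) for f, b in zip(fwd, reversed(bwd))]
    # LSTMd decoding layer: each step sees h_t and the previous tag vector T_{t-1}.
    h2, c2, T_prev, tags = np.zeros(d), np.zeros(d), np.zeros(d), []
    for h in H:
        h2, c2 = lstm_step(np.concatenate([h, T_prev]), h2, c2, *P['dec'])
        T_prev = P['W_ts'] @ h2                      # tag predicted vector T_t
        y = P['W_y'] @ T_prev                        # Eq. (14), bias omitted
        tags.append(int(np.argmax(np.exp(y) / np.exp(y).sum())))   # Eq. (15)
    return tags

def init_params(in_dim, d, n_tags, seed=0):
    rng = np.random.default_rng(seed)
    def lstm(idim):
        return (0.1 * rng.normal(size=(4*d, idim)),
                0.1 * rng.normal(size=(4*d, d)),
                np.zeros(4*d))
    return {'d': d, 'fwd': lstm(in_dim), 'bwd': lstm(in_dim), 'dec': lstm(3*d),
            'W_ts': 0.1 * rng.normal(size=(d, d)),
            'W_y': 0.1 * rng.normal(size=(n_tags, d))}

# toy run on five random "word embeddings"
P = init_params(in_dim=8, d=6, n_tags=5)
rng = np.random.default_rng(1)
print(encode_decode([rng.normal(size=8) for _ in range(5)], P))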
The objective function can be defined as: L =max |D| X j=1 Lj X t=1 (log(p(j) t = y(j) t |xj, Θ) · I(O) +α · log(p(j) t = y(j) t |xj, Θ) · (1 −I(O))), where |D| is the size of training set, Lj is the length of sentence xj, y(j) t is the label of word t in sentence xj and p(j) t is the normalized probabilities of tags which defined in Formula 15. Besides, I(O) is a switching function to distinguish the loss of tag ’O’ and relational tags that can indicate the results. It is defined as follows: I(O) = ( 1, if tag = ′O′ 0, if tag ̸= ′O′. α is the bias weight. The larger α is, the greater influence of relational tags on the model. 4 Experiments 4.1 Experimental setting Dataset To evaluate the performance of our methods, we use the public dataset NYT 2 which is produced by distant supervision method (Ren et al., 2017). A large amount of training data can be obtained by means of distant supervision methods without manually labeling. While the test set is manually labeled to ensure its quality. In total, the training data contains 353k triplets, and the test set contains 3, 880 triplets. Besides, the size of relation set is 24. Evaluation We adopt standard Precision (Prec), Recall (Rec) and F1 score to evaluate the results. Different from classical methods, our method can extract triplets without knowing the information of entity types. In other words, we did not use the label of entity types to train the model, therefore we do not need to consider the entity types in the evaluation. A triplet is regarded as correct when its relation type and the head offsets of two corresponding entities are both correct. Besides, the ground-truth relation mentions are given and “None” label is excluded as (Ren et al., 2017; Li and Ji, 2014; Miwa and Bansal, 2016) did. We create a validation set by randomly sampling 10% data from test set and use the remaining data as evaluation based on (Ren et al., 2017)’s suggestion. We run 10 times for each experiment then report the average results and their standard deviation as Table 1 shows. Hyperparameters Our model consists of a BiLSTM encoding layer and a LSTM decoding layer with bias objective function. The word embeddings used in the encoding part are initialed by running word2vec3 (Mikolov et al., 2013) on NYT training corpus. The dimension of the word embeddings is d = 300. We regularize our network using dropout on embedding layer and the dropout ratio is 0.5. The number of lstm units in encoding layer is 300 and the number in decoding layer is 600. The bias parameter α corresponding to the results in Table 1 is 10. 2The dataset can be downloaded at: https://github.com/shanzhenren/CoType. There are three data sets in the public resource and we only use the NYT dataset. Because more than 50% of the data in BioInfer has overlapping relations which is beyond the scope of this paper. As for dataset Wiki-KBP, the number of relation type in the test set is more than that of the train set, which is also not suitable for a supervised training method. Details of the data can be found in Ren’s(Ren et al., 2017) paper. 3https://code.google.com/archive/p/word2vec/ 1231 Methods Prec. Rec. 
F1 FCM 0.553 0.154 0.240 DS+logistic 0.258 0.393 0.311 LINE 0.335 0.329 0.332 MultiR 0.338 0.327 0.333 DS-Joint 0.574 0.256 0.354 CoType 0.423 0.511 0.463 LSTM-CRF 0.693 ± 0.008 0.310 ± 0.007 0.428 ± 0.008 LSTM-LSTM 0.682 ± 0.007 0.320 ± 0.006 0.436 ± 0.006 LSTM-LSTM-Bias 0.615 ± 0.008 0.414 ± 0.005 0.495 ± 0.006 Table 1: The predicted results of different methods on extracting both entities and their relations. The first part (from row 1 to row 3) is the pipelined methods and the second part (row 4 to 6) is the jointly extracting methods. Our tagging methods are shown in part three (row 7 to 9). In this part, we not only report the results of precision, recall and F1, we also compute their standard deviation. Baselines We compare our method with several classical triplet extraction methods, which can be divided into the following categories: the pipelined methods, the jointly extracting methods and the end-to-end methods based our tagging scheme. For the pipelined methods, we follow (Ren et al., 2017)’s settings: The NER results are obtained by CoType (Ren et al., 2017) then several classical relation classification methods are applied to detect the relations. These methods are: (1) DS-logistic (Mintz et al., 2009) is a distant supervised and feature based method, which combines the advantages of supervised IE and unsupervised IE features; (2) LINE (Tang et al., 2015) is a network embedding method, which is suitable for arbitrary types of information networks; (3) FCM (Gormley et al., 2015) is a compositional model that combines lexicalized linguistic context and word embeddings for relation extraction. The jointly extracting methods used in this paper are listed as follows: (4) DS-Joint (Li and Ji, 2014) is a supervised method, which jointly extracts entities and relations using structured perceptron on human-annotated dataset; (5) MultiR (Hoffmann et al., 2011) is a typical distant supervised method based on multi-instance learning algorithms to combat the noisy training data; (6) CoType (Ren et al., 2017) is a domain independent framework by jointly embedding entity mentions, relation mentions, text features and type labels into meaningful representations. In addition, we also compare our method with two classical end-to-end tagging models: LSTMCRF (Lample et al., 2016) and LSTM-LSTM (Vaswani et al., 2016). LSTM-CRF is proposed for entity recognition by using a bidirectional LSTM to encode input sentence and a conditional random fields to predict the entity tag sequence. Different from LSTM-CRF, LSTM-LSTM uses a LSTM layer to decode the tag sequence instead of CRF. They are used for the first time to jointly extract entities and relations based on our tagging scheme. 4.2 Experimental Results We report the results of different methods as shown in Table 1. It can be seen that our method, LSTM-LSTM-Bias, outperforms all other methods in F1 score and achieves a 3% improvement in F1 over the best method CoType (Ren et al., 2017). It shows the effectiveness of our proposed method. Furthermore, from Table 1, we also can see that the jointly extracting methods are better than pipelined methods, and the tagging methods are better than most of the jointly extracting methods. It also validates the validity of our tagging scheme for the task of jointly extracting entities and relations. When compared with the traditional methods, the precisions of the end-to-end models are significantly improved. But only LSTM-LSTM-Bias can be better to balance the precision and recall. 
The reason may be that these end-to-end models all use a Bi-LSTM encoding input sentence and different neural networks to decode the results. The methods based on neural networks can well fit the data. Therefore, they can learn the common features of the training set well and may lead to the lower expansibility. We also find that the LSTM-LSTM 1232 Elements E1 E2 (E1,E2) PRF Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 LSTM-CRF 0.596 0.325 0.420 0.605 0.325 0.423 0.724 0.341 0.465 LSTM-LSTM 0.593 0.342 0.434 0.619 0.334 0.434 0.705 0.340 0.458 LSTM-LSTM-Bias 0.590 0.479 0.529 0.597 0.451 0.514 0.645 0.437 0.520 Table 2: The predicted results of triplet’s elements based on our tagging scheme. model is better than LSTM-CRF model based on our tagging scheme. Because, LSTM is capable of learning long-term dependencies and CRF (Lafferty et al., 2001) is good at capturing the joint probability of the entire sequence of labels. The related tags may have a long distance from each other. Hence, LSTM decoding manner is a little better than CRF. LSTM-LSTM-Bias adds a bias weight to enhance the effect of entity tags and weaken the effect of invalid tag. Therefore, in this tagging scheme, our method can be better than the common LSTM-decoding methods. 5 Analysis and Discussion 5.1 Error Analysis In this paper, we focus on extracting triplets composed of two entities and a relation. Table 1 has shown the predict results of the task. It treats an triplet is correct only when the relation type and the head offsets of two corresponding entities are both correct. In order to find out the factors that affect the results of end-to-end models, we analyze the performance on predicting each element in the triplet as Table 2 shows. E1 and E2 represent the performance on predicting each entity, respectively. If the head offset of the first entity is correct, then the instance of E1 is correct, the same to E2. Regardless of relation type, if the head offsets of two corresponding entities are both correct, the instance of (E1, E2) is correct. As shown in Table 2, (E1, E2) has higher precision when compared with E1 and E2. But its recall result is lower than E1 and E2. It means that some of the predicted entities do not form a pair. They only obtain E1 and do not find its corresponding E2, or obtain E2 and do not find its corresponding E1. Thus it leads to the prediction of more single E and less (E1, E2) pairs. Therefore, entity pair (E1, E2) has higher precision and lower recall than single E. Besides, the predicted results of (E1, E2) in Table 2 have about 3% improvement when compared predicted results in Table 1, which means that 3% of the test data is predicted to be wrong because the relation type is predicted to be wrong. 5.2 Analysis of Biased Loss Different from LSTM-CRF and LSTM-LSTM, our approach is biased towards relational labels to enhance links between entities. In order to further analyze the effect of the bias objective function, we visualize the ratio of predicted single entities for each end-to-end method as Figure 4. The single entities refer to those who cannot find their corresponding entities. Figure 4 shows whether it is E1 or E2, our method can get a relatively low ratio on the single entities. It means that our method can effectively associate two entities when compared LSTM-CRF and LSTM-LSTM which pay little attention to the relational tags. 
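Concretely, the biased objective amounts to a per-token weight on the log-likelihood: tokens whose gold tag is "O" contribute with weight 1 and all relational tags with weight α. The following is a small editorial sketch of that weighting (not the released training code); the tag-id convention and shapes are assumptions.

import numpy as np

def biased_log_likelihood(log_probs, gold_tags, o_tag_id, alpha=10.0):
    # log_probs: (T, n_tags) per-token log tag probabilities
    # gold_tags: length-T list of gold tag ids
    # The gold tag's log probability is weighted by 1 for 'O' and by alpha
    # otherwise, matching the switching function I(O) of Section 3.3.
    total = 0.0
    for t, y in enumerate(gold_tags):
        weight = 1.0 if y == o_tag_id else alpha
        total += weight * log_probs[t, y]
    return total   # maximised during training (e.g. with RMSprop)

# toy example: three tokens, four tags, tag id 0 standing for 'O'
lp = np.log(np.full((3, 4), 0.25))
print(biased_log_likelihood(lp, [0, 2, 3], o_tag_id=0, alpha=10.0))

The larger α is, the more the gradient is dominated by the relational tags, which is consistent with the precision/recall trade-off observed when α is varied below.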
Single E1 Single E2 0.00 0.05 0.10 0.15 0.20 0.25 The Ratio of Single E 0.178 0.186 0.151 0.167 0.135 0.101 LSTM-CRF LSTM-LSTM LSTM-LSTM-Bias Figure 4: The ratio of predicted single entities for each method. The higher of the ratio the more entities are left. Besides, we also change the Bias Parameter α from 1 to 20, and the predicted results are shown in Figure 5. If α is too large, it will affect the accuracy of prediction and if α is too small, the recall will decline. When α = 10, LSTM-LSTMBias can balance the precision and recall, and can achieve the best F1 scores. 1233 Standard S1 [Panama City Beach]E2contain has condos , but the area was one of only two in [Florida]E1contain where sales rose in March , compared with a year earlier. LSTM-LSTM Panama City Beach has condos , but the area was one of only two in [Florida]E1contain where sales rose in March , compared with a year earlier. LSTM-LSTM-Bias [Panama City Beach]E2contain has condos , but the area was one of only two in [Florida]E1contain where sales rose in March , compared with a year earlier. Standard S2 All came from [Nuremberg]E2contain , [Germany]E1contain , a center of brass production since the Middle Ages. LSTM-LSTM All came from Nuremberg , [Germany]E1contain , a center of brass production since the [Middle Ages]E2contain. LSTM-LSTM-Bias All came from Nuremberg , [Germany]E1contain , a center of brass production since the [Middle Ages]E2contain. Standard S3 [Stephen A.]E2CF , the co-founder of the [Blackstone Group]E1CF , which is in the process of going public , made $ 400 million last year. LSTM-LSTM [Stephen A.]E1CF , the co-founder of the [Blackstone Group]E1CF , which is in the process of going public , made $ 400 million last year. LSTM-LSTM-Bias [Stephen A.]E1CF , the co-founder of the [Blackstone Group]E2CF , which is in the process of going public , made $ 400 million last year. Table 3: Output from different models. Standard Si represents the gold standard of sentence i. The blue part is the correct result, and the red one is the wrong one. E1CF in case ’3’ is short for E1Company−Founder. 0 5 10 15 20 Bias Parameter 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Value of ( P,R,F) Precition Recall F1 Figure 5: The results predicted by LSTM-LSTMBias on different bias parameter α. 5.3 Case Study In this section, we observe the prediction results of end-to-end methods, and then select several representative examples to illustrate the advantages and disadvantages of the methods as Table 3 shows. Each example contains three row, the first row is the gold standard, the second and the third rows are the extracted results of model LSTM-LSTM and LSTM-LSTM-Bias respectively. S1 represents the situation that the distance between the two interrelated entities is far away from each other, which is more difficult to detect their relationships. When compared with LSTMLSTM, LSTM-LSTM-Bias uses a bias objective function which enhance the relevance between entities. Therefore, in this example, LSTM-LSTMBias can extract two related entities, while LSTMLSTM can only extract one entity of “Florida” and can not detect entity “Panama City Beach”. S2 is a negative example that shows these methods may mistakenly predict one of the entity. There are no indicative words between entities Nuremberg and Germany. Besides, the patten “a * of *” between Germany and MiddleAges may be easy to mislead the models that there exists a relation of “Contains” between them. 
The problem can be solved by adding some samples of this kind of expression patterns to the training data. S3 is a case that models can predict the entities’ head offset right, but the relational role is wrong. LSTM-LSTM treats both “Stephen A. Schwarzman” and “Blackstone Group” as entity E1, and can not find its corresponding E2. Although, LSTM-LSMT–Bias can find the entities pair (E1, E2), it reverses the roles of “Stephen A. Schwarzman” and “Blackstone Group”. It shows that LSTM-LSTM-Bias is able to better on pre1234 dicting entities pair, but it remains to be improved in distinguishing the relationship between the two entities. 6 Conclusion In this paper, we propose a novel tagging scheme and investigate the end-to-end models to jointly extract entities and relations. The experimental results show the effectiveness of our proposed method. But it still has shortcoming on the identification of the overlapping relations. In the future work, we will replace the softmax function in the output layer with multiple classifier, so that a word can has multiple tags. In this way, a word can appear in multiple triplet results, which can solve the problem of overlapping relations. Although, our model can enhance the effect of entity tags, the association between two corresponding entities still requires refinement in next works. Acknowledgments We thank Xiang Ren for dataset details and helpful discussions. This work is also supported by the National High Technology Research and Development Program of China (863 Program) (Grant No. 2015AA015402), the National Natural Science Foundation of China (No. 61602479) and the NSFC project 61501463. References Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI. volume 7, pages 2670–2676. Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. In Processings of Transactions of the Association for Computational Linguistics. Cıcero Nogueira et al. dos Santos. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53th ACL international conference. volume 1, pages 626–634. Matthew R Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In Proceedings of the EMNLP. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 541–550. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991 . Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP. volume 3, page 413. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the 43th ACL international conference. page 22. Arzoo Katiyar and Claire Cardie. 2016. Investigating lstms for joint extraction of opinion entities and relations. In Proceedings of the 54th ACL international conference. John Lafferty, Andrew McCallum, Fernando Pereira, et al. 2001. 
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the eighteenth international conference on machine learning, ICML. volume 1, pages 282–289. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the NAACL international conference. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52rd Annual Meeting of the Association for Computational Linguistics. pages 402–412. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Conference on Empirical Methods in Natural Language Processing. pages 879–888. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL. Association for Computational Linguistics, pages 1003–1011. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54rd Annual Meeting of the Association for Computational Linguistics. 1235 Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 1858–1869. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes 30(1):3–26. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In International Conference on Computational Linguistics. pages 78–86. Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, Tarek F Abdelzaher, and Jiawei Han. 2017. Cotype: Joint extraction of typed entities and relations with knowledge bases. In Proceedings of the 26th WWW international conference. Bryan et al. Rink. 2010. Utd: Classifying semantic relations by combining lexical and semantic resources. In Proceedings of the 5th International Workshop on Semantic Evaluation. pages 256–259. Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, and Andrew McCallum. 2013. Joint inference of entities, relations, and coreference. In Proceedings of the 2013 workshop on Automated knowledge base construction. ACM, pages 1–6. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web. ACM, pages 1067–1077. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop. In COURSERA: Neural networks for machine learning. Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with lstms. In Proceedings of the NAACL international conference. pages 232–237. Kun et al. Xu. 2015a. Semantic relation classification via convolutional neural networks with simple negative sampling. In Proceedings of the EMNLP. Yan et al. Xu. 2015b. 
Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of EMNLP international conference. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51rd Annual Meeting of the Association for Computational Linguistics. pages 1640–1649. Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proceedings of the 21th COLING international conference. pages 1399–1407. Daojian et al. Zeng. 2014. Relation classification via convolutional deep neural network. In Proceedings of the 25th COLING international conference. pages 2335–2344. Feifei Zhai, Saloni Potdar, Bing Xiang, and Bowen Zhou. 2017. Neural models for sequence chunking. In Proceedings of the AAAI international conference. Suncong Zheng, Jiaming Xu, Peng Zhou, Hongyun Bao, Zhenyu Qi, and Bo Xu. 2016. A neural network framework for relation extraction: Learning entity semantic and relation pattern. KnowledgeBased Systems 114:12–23. 1236
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1237–1247 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1114 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1237–1247 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1114 A Local Detection Approach for Named Entity Recognition and Mention Detection Mingbin Xu, Hui Jiang, Sedtawut Watcharawittayakul Department of Electrical Engineering and Computer Science Lassonde School of Engineering, York University 4700 Keele Street, Toronto, Ontario, Canada {xmb, hj, watchara}@eecs.yorku.ca Abstract In this paper, we study a novel approach for named entity recognition (NER) and mention detection (MD) in natural language processing. Instead of treating NER as a sequence labeling problem, we propose a new local detection approach, which relies on the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left/right contexts into a fixedsize representation. Subsequently, a simple feedforward neural network (FFNN) is learned to either reject or predict entity label for each individual text fragment. The proposed method has been evaluated in several popular NER and MD tasks, including CoNLL 2003 NER task and TAC-KBP2015 and TAC-KBP2016 Tri-lingual Entity Discovery and Linking (EDL) tasks. Our method has yielded pretty strong performance in all of these examined tasks. This local detection approach has shown many advantages over the traditional sequence labeling methods. 1 Introduction Natural language processing (NLP) plays an important role in artificial intelligence, which has been extensively studied for many decades. Conventional NLP techniques include the rule-based symbolic approaches widely used about two decades ago, and the more recent statistical approaches relying on feature engineering and statistical models. In the recent years, deep learning approach has achieved huge successes in many applications, ranging from speech recognition to image classification. It is drawing increasing attention in the NLP community. In this paper, we are interested in a fundamental problem in NLP, namely named entity recognition (NER) and mention detection (MD). NER and MD are very challenging tasks in NLP, laying the foundation of almost every NLP application. NER and MD are tasks of identifying entities (named and/or nominal) from raw text, and classifying the detected entities into one of the pre-defined categories such as person (PER), organization (ORG), location (LOC), etc. Some tasks focus on named entities only, while the others also detect nominal mentions. Moreover, nested mentions may need to be extracted too. For example, [Sue]P ER and her [brother]P ER N studied in [University of [Toronto]LOC]ORG. where Toronto is a LOC entity, embedded in another longer ORG entity University of Toronto. Similar to many other NLP problems, NER and MD is formulated as a sequence labeling problem, where a tag is sequentially assigned to each word in the input sentence. It has been extensively studied in the NLP community (Borthwick et al., 1998). The core problem is to model the conditional probability of an output sequence given an arbitrary input sequence. 
Many hand-crafted features are combined with statistical models, such as conditional random fields (CRFs) (Nguyen et al., 2010), to compute conditional probabilities. More recently, some popular neural networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are proposed to solve sequence labelling problems. In the inference stage, the learned models compute the conditional probabilities and the output sequence is generated by the Viterbi decoding algorithm (Viterbi, 1967). In this paper, we propose a novel local detection approach for solving NER and MD problems. The idea can be easily extended to many other se1237 quence labeling problems, such as chunking, partof-speech tagging (POS). Instead of globally modeling the whole sequence in training and jointly decode the entire output sequence in test, our method examines all word segments (up to a certain length) in a sentence. A word segment will be examined individually based on the underlying segment itself and its left and right contexts in the sentence so as to determine whether this word segment is a valid named entity and the corresponding label if it is. This approach conforms to the way human resolves an NER problem. Given any word fragment and its contexts in a sentence or paragraph, people accurately determine whether this word segment is a named entity or not. People rarely conduct a global decoding over the entire sentence to make such a decision. The key to making an accurate local decision for each individual fragment is to have full access to the fragment itself as well as its complete contextual information. The main pitfall to implement this idea is that we can not easily encode the segment and its contexts in models since they are of varying lengths in natural languages. Many feature engineering techniques have been proposed but all of these methods will inevitably lead to information loss. In this work, we propose to use a recent fixed-size encoding method, namely fixed-size ordinally forgetting encoding (FOFE) (Zhang et al., 2015a,b), to solve this problem. The FOFE method is a simple recursive encoding method. FOFE theoretically guarantees (almost) unique and lossless encoding of any variable-length sequence. The left and the right contexts for each word segment are encoded by FOFE method, and then a simple neural network can be trained to make a precise recognition for each individual word segment based on the fixed-size presentation of the contextual information. This FOFE-based local detection approach is more appealing to NER and MD. Firstly, feature engineering is almost eliminated. Secondly, under this local detection framework, nested mention is handled with little modification. Next, it makes better use of partially-labeled data available from many application scenarios. Sequence labeling model requires all entities in a sentence to be labeled. If only some (not all) entities are labeled, it is not effective to learn a sequence labeling model. However, every single labeled entity, along with its contexts, may be used to learn the proposed model. At last, due to the simplicity of FOFE, simple neural networks, such as multilayer perceptrons, are sufficient for recognition. These models are much faster to train and easier to tune. In the test stage, all possible word segments from a sentence may be packed into a mini-batch, jointly recognized in parallel on GPUs. This leads to a very fast decoding process as well. 
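To make the fragment-by-fragment scheme concrete, the short sketch below enumerates every candidate word segment up to a fixed length K together with its left and right contexts. This is a sketch under stated assumptions, not the authors' released code; the example sentence and window size are purely illustrative.

```python
def enumerate_fragments(tokens, max_len):
    """Yield every contiguous word segment of up to max_len words,
    together with its left and right contexts in the sentence."""
    n = len(tokens)
    for start in range(n):
        for end in range(start + 1, min(start + max_len, n) + 1):
            fragment = tokens[start:end]
            left_ctx = tokens[:start]     # words before the fragment
            right_ctx = tokens[end:]      # words after the fragment
            yield fragment, left_ctx, right_ctx

# Illustrative sentence, loosely based on the "Toronto Maple Leafs" example used later.
sent = "the puck from space for the Toronto Maple Leafs home opener against".split()
candidates = list(enumerate_fragments(sent, max_len=3))
print(len(candidates))    # 33 fragments for this 12-token sentence with K = 3
```

At inference time all of these fragments can be packed into a single mini-batch and scored in parallel, which is what keeps the decoding step fast.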
In this paper, we have applied this FOFE-based local detection approach to several popular NER and MD tasks, including the CoNLL 2003 NER task and TAC-KBP2015 and TAC-KBP2016 Trilingual Entity Discovery and Linking (EDL) tasks. Our proposed method has yielded strong performance in all of these examined tasks. 2 Related Work It has been a long history of research involving neural networks (NN). In this section, we briefly review some recent NN-related research work in NLP, which may be relevant to our work. The success of word embedding (Mikolov et al., 2013; Liu et al., 2015) encourages researchers to focus on machine-learned representation instead of heavy feature engineering in NLP. Using word embedding as the typical feature representation for words, NNs become competitive to traditional approaches in NER. Many NLP tasks, such as NER, chunking and part-of-speech (POS) tagging can be formulated as sequence labeling tasks. In (Collobert et al., 2011), deep convolutional neural networks (CNN) and conditional random fields (CRF) are used to infer NER labels at a sentence level, where they still use many hand-crafted features to improve performance, such as capitalization features explicitly defined based on first-letter capital, non-initial capital and so on. Recently, recurrent neural networks (RNNs) have demonstrated the ability in modeling sequences (Graves, 2012). Huang et al. (2015) built on the previous CNN-CRF approach by replacing CNNs with bidirectional Long Short-Term Memory (B-LSTM). Though they have reported improved performance, they employ heavy feature engineering in that work, most of which is language-specific. There is a similar attempt in (Rondeau and Su, 2016) with full-rank CRF. CNNs are used to extract character-level features automatically in (dos Santos et al., 2015). Gazetteer is a list of names grouped by the predefined categories. Gazetteer is shown to be one of the most effective external knowledge sources 1238 to improve NER performance (Sang and Meulder, 2003). Thus, gazetteer is widely used in many NER systems. In (Chiu and Nichols, 2016), stateof-the-art performance on a popular NER task, i.e., CoNLL2003, is achieved by incorporating a large gazetteer. Different from previous ways to use a set of bits to indicate whether a word is in gazetteer or not, they have encoded a match in BIOES (Begin, Inside, Outside, End, Single) annotation, which captures positional information. Interestingly enough, none of these recent successes in NER was achieved by a vanilla RNN. Rather, these successes are often established by sophisticated models combining CNNs, LSTMs and CRFs in certain ways. In this paper, based on recent work in (Zhang et al., 2015a,b) and (Zhang et al., 2016), we propose a novel but simple solution to NER by applying DNN on top of FOFEbased features. This simpler approach can achieve performance very close to state-of-the-art on various NER and MD tasks, without using any external knowledge or feature engineering. 3 Preliminary In this section, we will briefly review some background techniques, which are important to our proposed NER and mention detection approach. 3.1 Deep Feedforward Neural Networks It is well known that neural network is a universal approximator under certain conditions (Hornik, 1991). A feedforward neural network (FFNN) is a weighted graph with a layered architecture. Each layer is composed of several nodes. Successive layers are fully connected. Each node applies a function on the weighted sum of the lower layer. 
An NN can learn by adjusting its weights in a process called back-propagation. The learned NN may be used to generalize and extrapolate to new inputs that have not been seen during training. 3.2 Fixed-size Ordinally Forgetting Encoding FFNN is a powerful computation model. However, it requires fixed-size inputs and lacks the ability of capturing long-term dependency. Because most NLP problems involves variablelength sequences of words, RNNs/LSTMs are more popular than FFNNs in dealing with these problems. The Fixed-size Ordinally Forgetting Encoding (FOFE), originally proposed in (Zhang et al., 2015a,b), nicely overcomes the limitations of FFNNs because it can uniquely and losslessly encode a variable-length sequence of words into a fixed-size representation. Give a vocabulary V , each word can be represented by a one-hot vector. FOFE mimics bag-ofwords (BOW) but incorporates a forgetting factor to capture positional information. It encodes any sequence of variable length composed by words in V . Let S = w1, w2, w3, ..., wT denote a sequence of T words from V , and et be the one-hot vector of the t-th word in S, where 1 ≤t ≤T. The FOFE of each partial sequence zt from the first word to the t-th word is recursively defined as: zt = ( 0, if t = 0 α · zt−1 + et, otherwise (1) where the constant α is called forgetting factor, and it is picked between 0 and 1 exclusively. Obviously, the size of zt is |V |, and it is irrelevant to the length of original sequence, T. Here’s an example. Assume that we have three words in our vocabulary, e.g. A, B, C, whose one-hot representations are [1, 0, 0], [0, 1, 0] and [0, 0, 1] respectively. When calculating from left to right, the FOFE for the sequence “ABC” is [α2, α, 1] and that of “ABCBC” is [α4, α+α3, 1+ α2]. The word sequences can be unequivocally recovered from their FOFE representations (Zhang et al., 2015a,b). The uniqueness of FOFE representation is theoretically guaranteed by the following two theorems: Theorem 1. If the forgetting factor α satisfies 0 < α ≤0.5, FOFE is unique for any countable vocabulary V and any finite value T. Theorem 2. For 0.5 < α < 1, given any finite value T and any countable vocabulary V , FOFE is almost unique everywhere, except only a finite set of countable choices of α. Though in theory uniqueness is not guaranteed when α is chosen from 0.5 to 1, in practice the chance of hitting such scenarios is extremely slim, almost impossible due to quantization errors in the system. Furthermore, in natural languages, normally a word does not appear repeatedly within a near context. Simply put, FOFE is capable of uniquely encoding any sequence of arbitrary length, serving as a fixed-size but theoretically lossless representation for any sequence. 1239 Figure 1: Illustration of the local detection approach for NER using FOFE codes as input and an FFNN as model. The window currently examines the fragment of Toronto Maple Leafs. The window will scan and scrutinize all fragments up to K words. 3.3 Character-level Models in NLP Kim et al. (2016) model morphology in the character level since this may provide some additional advantages in dealing with unknown or out-ofvocabulary (OOVs) words in a language. In the literature, convolutional neural networks (CNNs) have been widely used as character-level models in NLP (Kim et al., 2016). A trainable character embedding is initialized based on a set of possible characters. When a word fragment comes, character vectors are retrieved according to its spelling to construct a matrix. 
This matrix can be viewed as a single-channel image. CNN is applied to generate a more abstract representation of the word fragment. The above FOFE method can be easily extended to model character-level feature in NLP. Any word, phrase or fragment can be viewed as a sequence of characters. Based on a pre-defined set of all possible characters, we apply the same FOFE method to encode the sequence of characters. This always leads to a fixed-size representation, irrelevant to the number of characters in question. For example, a word fragment of “Walmart” may be viewed as a sequence of seven characters: ‘W’, ‘a’, ‘l’, ‘m’, ‘a’, ‘r’, ‘t’. The FOFE codes of character sequences are always fixed-sized and they can be directly fed to an FFNN for morphology modeling. 4 FOFE-based Local Detection for NER As described above, our FOFE-based local detection approach for NER, called FOFE-NER hereafter, is motivated by the way how human actually infers whether a word segment in text is an entity or mention, where the entity types of the other entities in the same sentence is not a must. Particularly, the dependency between adjacent entities is fairly weak in NER. Whether a fragment is an entity or not, and what class it may belong to, largely depend on the internal structure of the fragment itself as well as the left and right contexts in which it appears. To a large extent, the meaning and spelling of the underlying fragment are informative to distinguish named entities from the rest of the text. Contexts play a very important role in NER or MD when it involves multi-sense words/phrases or out-of-vocabulary (OOV) words. As shown in Figure 1, our proposed FOFENER method will examine all possible fragments in text (up to a certain length) one by one. For each fragment, it uses the FOFE method to fully encode the underlying fragment itself, its left context and right context into some fixed-size representations, which are in turn fed to an FFNN to predict whether the current fragment is NOT a valid entity mention (NONE), or its correct entity type (PER, LOC, ORG and so on) if it is a valid mention. This method is appealing because the FOFE codes serves as a theoretically lossless representation of the hypothesis and its full contexts. FFNN is used as a universal approximator to map from text to the entity labels. In this work, we use FOFE to explore both word-level and character-level features for each fragment and its contexts. 4.1 Word-level Features FOFE-NER generates several word-level features for each fragment hypothesis and its left and right contexts as follows: • Bag-of-word (BoW) of the fragment, e.g. 1240 bag-of-word vector of ‘Toronto’, ‘Maple’ and ‘Leafs’ in Figure 1. • FOFE code for left context including the fragment, e.g. FOFE code of the word sequence of “... puck from space for the Toronto Maple Leafs” in Figure 1. • FOFE code for left context excluding the fragment, e.g. the FOFE code of the word sequence of “... puck from space for the” in Figure 1.. • FOFE code for right context including the fragment, e.g. the FOFE code of the word sequence of “... against opener home ’ Leafs Maple Toronto” in Figure 1. • FOFE code for right context excluding the fragment, e.g. the FOFE code of the word sequence of “... against opener home ” in Figure 1. Moreover, all of the above word features are computed for both case-sensitive words in raw text as well as case-insensitive words in normalized lower-case text. 
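A compact sketch of the FOFE recursion in eq. (1) and of how the word-level blocks listed above can be assembled for one fragment. It reproduces the "ABC"/"ABCBC" example from Section 3.2; the helper names are illustrative, the right-hand context is read from the sentence end back toward the fragment (as in the reversed "against opener home ..." example), and the case-sensitive/case-insensitive duplication and the embedding projection are omitted.

```python
import numpy as np

def fofe(words, vocab, alpha=0.5):
    """Eq. (1): z_t = alpha * z_{t-1} + e_t, starting from z_0 = 0."""
    z = np.zeros(len(vocab))
    for w in words:
        e = np.zeros(len(vocab))
        e[vocab[w]] = 1.0
        z = alpha * z + e
    return z

# Sanity check against the Section 3.2 example (alpha = 0.5, vocabulary {A, B, C}):
abc = {"A": 0, "B": 1, "C": 2}
print(fofe(list("ABC"), abc))     # [alpha^2, alpha, 1]                    -> [0.25, 0.5, 1.0]
print(fofe(list("ABCBC"), abc))   # [alpha^4, alpha + alpha^3, 1 + alpha^2] -> [0.0625, 0.625, 1.25]

def bag_of_words(words, vocab):
    v = np.zeros(len(vocab))
    for w in words:
        v[vocab[w]] += 1.0
    return v

def word_level_features(tokens, start, end, vocab, alpha=0.5):
    """The five word-level blocks of Section 4.1 for the fragment tokens[start:end]."""
    return np.concatenate([
        bag_of_words(tokens[start:end], vocab),      # BoW of the fragment
        fofe(tokens[:end], vocab, alpha),            # left context incl. fragment
        fofe(tokens[:start], vocab, alpha),          # left context excl. fragment
        fofe(tokens[start:][::-1], vocab, alpha),    # right context incl. fragment, right-to-left
        fofe(tokens[end:][::-1], vocab, alpha),      # right context excl. fragment, right-to-left
    ])

sent = "puck from space for the Toronto Maple Leafs home opener against".split()
vocab = {w: i for i, w in enumerate(sorted(set(sent)))}
feats = word_level_features(sent, 5, 8, vocab)       # fragment: "Toronto Maple Leafs"
print(feats.shape)                                   # (5 * len(vocab),)
```

In the full system each of these blocks is computed twice (on raw and lower-cased text) and then projected through the embedding matrices described in the following paragraph.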
These FOFE codes are projected to lower-dimension dense vectors based on two projection matrices, Ws and Wi, for casesensitive and case-insensitive FOFE codes respectively. These two projection matrices are initialized by word embeddings trained by word2vec, and fine-tuned during the learning of the neural networks. Due to the recursive computation of FOFE codes in eq.(1), all of the above FOFE codes can be jointly computed for one sentence or document in a very efficient manner. 4.2 Character-level Features On top of the above word-level features, we also augment character-level features for the underlying segment hypothesis to further model its morphological structure. For the example in Figure 1, the current fragment, Toronto Maple Leafs, is considered as a sequence of case-sensitive characters, i.e. “{‘T’, ‘o’, ..., ‘f’ , ‘s’ }”, we then add the following character-level features for this fragment: • Left-to-right FOFE code of the character sequence of the underlying fragment. That is the FOFE code of the sequence, “‘T’, ‘o’, ..., ‘f’ , ‘s’ ”. • Right-to-left FOFE code of the character sequence of the underlying fragment. That is the FOFE code of the sequence, “‘s’ , ‘f’ , ..., ‘o’, ‘T’ ”. These case-sensitive character FOFE codes are also projected by another character embedding matrix, which is randomly initialized and finetuned during model training. Alternatively, we may use the character CNNs, as described in Section 3.3, to generate characterlevel features for each fragment hypothesis as well. 5 Training and Decoding Algorithm Obviously, the above FOFE-NER model will take each sentence of words, S = [w1, w2, w3, ..., wm], as input, and examine all continuous subsequences [wi, wi+1, wi+2, ..., wj] up to n words in S for possible entity types. All sub-sequences longer than n words are considered as non-entities in this work. When we train the model, based on the entity labels of all sentences in the training set, we will generate many sentence fragments up to n words. These fragments fall into three categories: • Exact-match with an entity label, e.g., the fragment “Toronto Maple Leafs” in the previous example. • Partial-overlap with an entity label, e.g., “for the Toronto”. • Disjoint with all entity label, e.g. “from space for”. For all exact-matched fragments, we generate the corresponding outputs based on the types of the matched entities in the training set. For both partial-overlap and disjoint fragments, we introduce a new output label, NONE, to indicate that these fragments are not a valid entity. Therefore, the output nodes in the neural networks contains all entity types plus a rejection option denoted as NONE. During training, we implement a producerconsumer software design such that a thread fetches training examples, computes all FOFE codes and packs them as a mini-batch while the other thread feeds the mini-batches to neural networks and adjusts the model parameters and all projection matrices. Since “partial-overlap” and “disjoint” significantly outnumber “exact-match”, they are down-sampled so as to balance the data set. During inference, all fragments not longer than 1241 n words are all fed to FOFE-NER to compute their scores over all entity types. In practice, these fragments can be packed as one mini-batch so that we can compute them in parallel on GPUs. As the NER result, the FOFE-NER model will return a subset of fragments only if: i) they are recognized as a valid entity type (not NONE); AND ii) their NN scores exceed a global pruning threshold. 
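A sketch of the fragment labelling and down-sampling step described above: exact matches keep their entity type, while partial-overlap and disjoint fragments become NONE and are kept only with some probability. The keep-rate is an illustrative parameter; the paper does not give its value.

```python
import random

def label_fragments(fragments, gold, none_keep_rate=0.1, rng=random):
    """fragments: list of half-open (start, end) spans up to n words;
    gold: dict mapping gold entity spans to their types.
    Returns (span, label) training examples, with NONE examples down-sampled
    because partial-overlap and disjoint fragments vastly outnumber exact matches."""
    examples = []
    for span in fragments:
        if span in gold:                           # exact-match
            examples.append((span, gold[span]))
        elif rng.random() < none_keep_rate:        # partial-overlap or disjoint
            examples.append((span, "NONE"))
    return examples

# e.g. a 7-token sentence with one gold ORG span covering tokens 2..4
gold = {(2, 5): "ORG"}
frags = [(s, e) for s in range(7) for e in range(s + 1, min(s + 3, 7) + 1)]
print(label_fragments(frags, gold))
# The exact match (2, 5) -> 'ORG' is always kept; a random ~10% of the rest appear as NONE.
```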
Occasionally, some partially-overlapped or nested fragments may occur in the above pruned prediction results. We can use one of the following simple post-processing methods to remove overlappings from the final results: 1. highest-first: We check every word in a sentence. If it is contained by more than one fragment in the pruned results, we only keep the one with the maximum NN score and discard the rest. 2. longest-first: We check every word in a sentence. If it is contained by more than one fragment in the pruned results, we only keep the longest fragment and discard the rest. Either of these strategies leads to a collection of non-nested, non-overlapping, non-NONE entity labels. In some tasks, it may require to label all nested entities. This has imposed a big challenge to the sequence labeling methods. However, the above post-processing can be slightly modified to generate nested entities’ labels. In this case, we first run either highest-first or longest-first to generate the first round result. For every entity survived in this round, we will recursively run either highestfirst or longest-first on all entities in the original set, which are completely contained by it. This will generate more prediction results. This process may continue to allow any levels of nesting. For example, for a sentence of “w1 w2 w3 w4 w5”, if the model first generates the prediction results after the global pruning, as [“w2w3”, PER, 0.7], [“w3w4”, LOC, 0.8], [“w1w2w3w4”, ORG, 0.9], if we choose to run highest-first, it will generate the first entity label as [“w1w2w3w4”, ORG, 0.9]. Secondly, we will run highest-first on the two fragments that are completely contained by the first one, i.e., [“w2w3”, PER, 0.7], [“w3w4”, LOC, 0.8], then we will generate the second nested entity label as [“w3w4”, LOC, 0.8]. Fortunately, in any real NER and MD tasks, it is pretty rare to have overlapped predictions in the NN outputs. Therefore, the extra expense to run this recursive post-processing method is minimal. 6 Second-Pass Augmentation As we know, CRF brings marginal performance gain to all taggers (but not limited to NER) because of the dependancies (though fairly weak) between entity types. We may easily add this level of information to our model by introducing another pass of FOFE-NER. We call it 2nd-pass FOFENER. In 2nd-pass FOFE-NER, another set of model is trained on outputs from the first-pass FOFENER, including all predicted entities. For example, given a sentence S = [w1, w2, ...wi, ...wj, ...wn] and an underlying word segment [wi, ..., wj] in the second pass, every predicted entity outside this segment is substituted by its entity type predicted from the first pass. For example, in the first pass, a sentence like “Google has also recruited Fei-Fei Li, director of the AI lab at Stanford University.” is predicted as: “<ORG> has also recruited FeiFei Li, director of the AI lab at <ORG>.” In 2ndpass FOFE-NER, when examining the segment “Fei-Fei Li”, the predicted entity types <ORG> are used to replace the actual named entities. The 2nd-pass FOFE-NER model is trained on the outputs of the first pass, where all detected entities are replaced by their predicted types as above. During inference, the results returned by the 1st-pass model are substituted in the same way. The scores for each hypothesis from 1st-pass model and 2nd-pass model are linear interpolated and then decoded by either highest-first or longestfirst to generate the final results of 2nd-pass FOFE-NER. 
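One reasonable reading of the two post-processing strategies is greedy span selection, sketched below. The paper states them word-by-word; the greedy form gives the same answer on the w1..w5 example, and the final comment shows the nested second round.

```python
def _keep_non_overlapping(predictions, key):
    kept = []
    for span in sorted(predictions, key=key):
        # keep the span only if it shares no word with anything already kept
        if all(span[1] <= k[0] or k[1] <= span[0] for k in kept):
            kept.append(span)
    return kept

def highest_first(predictions):
    """predictions: list of (start, end, label, score) with half-open spans."""
    return _keep_non_overlapping(predictions, key=lambda p: -p[3])

def longest_first(predictions):
    return _keep_non_overlapping(predictions, key=lambda p: -(p[1] - p[0]))

# The example from the text, with word positions 1..5:
preds = [(2, 4, "PER", 0.7),    # "w2 w3"
         (3, 5, "LOC", 0.8),    # "w3 w4"
         (1, 5, "ORG", 0.9)]    # "w1 w2 w3 w4"
print(highest_first(preds))     # [(1, 5, 'ORG', 0.9)]
# For nested output, highest_first would then be re-run on the spans fully
# contained in (1, 5), i.e. the first two, yielding (3, 5, 'LOC', 0.8) as well.
```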
Obviously, 2nd-pass FOFE-NER may capture the semantic roles of other entities while filtering out unwanted constructs and sparse combonations. On the other hand, it enables longer context expansion, since FOFE memorizes contextual information in an unselective decaying fashion. 7 Experiments In this section, we evaluate the effectiveness of our proposed methods on several popular NER and MD tasks, including CoNLL 2003 NER task and TAC-KBP2015 and TAC-KBP2016 Trilingual Entity Discovery and Linking (EDL) tasks. 1242 We have made our codes available at https:// github.com/xmb-cipher/fofe-ner for readers to reproduce the results in this paper. 7.1 CoNLL 2003 NER task The CoNLL-2003 dataset (Sang and Meulder, 2003) consists of newswire from the Reuters RCV1 corpus tagged with four types of nonnested named entities: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC). The top 100,000 words, are kept as vocabulary, including punctuations. For the case-sensitive embedding, an OOV is mapped to <unk> if it contains no upper-case letter and <UNK> otherwise. We perform grid search on several hyperparameters using a held-out dev set. Here we summarize the set of hyper-parameters used in our experiments: i) Learning rate: initially set to 0.128 and is multiplied by a decay factor each epoch so that it reaches 1/16 of the initial value at the end of the training; ii) Network structure: 3 fully-connected layers of 512 nodes with ReLU activation, randomly initialized based on a uniform distribution between − q 6 Ni+No and q 6 Ni+No (Glorot et al., 2011); iii) Character embeddings: 64 dimensions, randomly initialized. iv) mini-batch: 512; v) Dropout rate: initially set to 0.4, slowly decreased during training until it reaches 0.1 at the end. vi) Number of epochs: 128; vii)Embedding matrices case-sensitive and caseinsensitive word embeddings of 256 dimensions, trained from Reuters RCV1; viii) We stick to the official data train-dev-test partition. ix) Forgetting factor α = 0.5. 1 We have investigated the performance of our method on the CoNLL-2003 dataset by using different combinations of the FOFE features (both word-level and character-level). The detailed comparison results are shown in Table 1. In Table 2, we have compared our best performance with some top-performing neural network systems on this task. As we can see from Table 2, our system (highest-first decoding) yields very strong performance (90.85 in F1 score) in this task, outperforming most of neural network models reported on this 1The choice of the forgetting factor α is empirical. We’ve evaluated α = 0.5, 0.6, 0.7, 0.8 on a development set in some early experiments. It turns out that α = 0.5 is the best. As a result, α = 0.5 is used for all NER/MD tasks throughout this paper. dataset. More importantly, we have not used any hand-crafted features in our systems, and all features (either word or char level) are automatically derived from the data. Highest-first and longestfirst perform similarly. In (Chiu and Nichols, 2016)2, a slightly better performance (91.62 in F1 score) is reported but a customized gazetteer is used in theirs. 
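A minimal model sketch matching the hyper-parameters listed above (three fully-connected ReLU layers of 512 nodes, Glorot-uniform initialisation, dropout, initial learning rate 0.128). The paper does not say which framework or optimiser it uses, so Keras and plain SGD are assumptions here, the learning-rate and dropout schedules are only noted in comments, and the input dimensionality is a placeholder.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_fofe_ner_ffnn(input_dim, num_classes, dropout=0.4):
    """3 x 512 ReLU layers with Glorot-uniform initialisation (Section 7.1);
    the output covers all entity types plus the NONE rejection class."""
    inputs = keras.Input(shape=(input_dim,))       # concatenated FOFE feature vector
    x = inputs
    for _ in range(3):
        x = layers.Dense(512, activation="relu",
                         kernel_initializer="glorot_uniform")(x)
        x = layers.Dropout(dropout)(x)             # the paper anneals dropout from 0.4 down to 0.1
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    # the paper decays the learning rate each epoch to 1/16 of its initial value;
    # that schedule (and the choice of optimiser) is omitted in this sketch
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.128),
                  loss="sparse_categorical_crossentropy")
    return model

# 4 CoNLL entity types (PER, LOC, ORG, MISC) plus NONE; input_dim is purely illustrative.
model = build_fofe_ner_ffnn(input_dim=2048, num_classes=5)
model.summary()
```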
7.2 KBP2015 EDL Task Given a document collection in three languages (English, Chinese and Spanish), the KBP2015 trilingual EDL task (Ji et al., 2015) requires to automatically identify entities (including nested entities) from a source collection of textual documents in multiple languages as in Table 3, and classify them into one of the following pre-defined five types: Person (PER), Geo-political Entity (GPE), Organization (ORG), Location (LOC) and Facility (FAC). The corpus consists of news articles and discussion forum posts published in recent years, related but non-parallel across languages. Three models are trained and evaluated independently. Unless explicitly listed, hyperparameters follow those used for CoNLL2003 as described in section 7.1 and 2nd-pass model is not used. Three sets of word embeddings of 128 dimensions are derived from English Gigaword (Parker et al., 2011), Chinese Gigaword (Graff and Chen, 2005) and Spanish Gigaword (Mendonca et al., 2009) respectively. Some language-specific modifications are made: • Chinese: Because Chinese segmentation is not reliable, we label Chinese at character level. The analogous roles of case-sensitive word-embedding and case-sensitive wordembedding are played by character embedding and word-embedding in which the character appears. Neither Char FOFE features nor Char CNN features are used for Chinese. • Spanish: Character set of Spanish is a super set of that of English. When building character-level features, we use the mod function to hash each character’s UTF8 encoding into a number between 0 (inclusive) and 128 (exclusive). As shown in Table 4, our FOFE-based local detection method has obtained fairly strong perfor2In their work, they have used a combination of trainingset and dev-set to train the model, differing from all other systems (including ours) in Table 2. 1243 FEATURE P R F1 word-level case-insensitive context FOFE incl. word fragment 86.64 77.04 81.56 context FOFE excl. word fragment 53.98 42.17 47.35 BoW of word fragment 82.92 71.85 76.99 case-sensitive context FOFE incl. word fragment 88.88 79.83 84.12 context FOFE excl. word fragment 50.91 42.46 46.30 BoW of word fragment 85.41 74.95 79.84 char-level Char FOFE of word fragment 67.67 52.78 59.31 Char CNN of word fragment 78.93 69.49 73.91 all case-insensitive features 90.11 82.75 86.28 all case-sensitive features 90.26 86.63 88.41 all word-level features 92.03 86.08 88.96 all word-level & Char FOFE features 91.68 88.54 90.08 all word-level & Char CNN features 91.80 88.58 90.16 all word-level & all char-level features 93.29 88.27 90.71 all features + dev set + 5-fold cross-validation 92.58 89.31 90.92 all features + 2nd-pass 92.13 89.61 90.85 all features + 2nd-pass + dev set + 5-fold cross-validation 92.62 89.77 91.17 Table 1: Effect of various FOFE feature combinations on the CoNLL2003 test data. algorithm word char gaz cap pos F1 CNN-BLSTM-CRF (Collobert et al., 2011)      89.59 BLSTM-CRF (Huang et al., 2015)      90.10 BLSTM-CRF (Rondeau and Su, 2016)      89.28 BLSTM-CRF, char-CNN (Chiu and Nichols, 2016)      91.62 Stack-LSTM-CRF, char-LSTM (Lample et al., 2016)      90.94 this work      90.85 Table 2: Performance (F1 score) comparison among various neural models reported on the CoNLL dataset, and the different features used in these methods. 
English Chinese Spanish ALL Train 168 147 129 444 Eval 167 167 166 500 Table 3: Number of Documents in KBP2015 2015 track best ours P R F1 P R F1 Trilingual 75.9 69.3 72.4 78.3 69.9 73.9 English 79.2 66.7 72.4 77.1 67.8 72.2 Chinese 79.2 74.8 76.9 79.3 71.7 75.3 Spanish 78.4 72.2 75.2 79.9 71.8 75.6 Table 4: Entity Discovery Performance of our method on the KBP2015 EDL evaluation data, with comparison to the best systems in KBP2015 official evaluation. mance in the KBP2015 dataset. The overall trilingual entity discovery performance is slightly better than the best systems participated in the official KBP2015 evaluation, with 73.9 vs. 72.4 as measured by F1 scores. Outer and inner decodings are longest-first and highest-first respectively. 7.3 KBP2016 EDL task In KBP2016, the trilingual EDL task is extended to detect nominal mentions of all 5 entity types for all three languages. In our experiments, for simplicity, we treat nominal mention types as some extra entity types and detect them along with named entities together with a single model. 7.3.1 Data Description No official training set is provided in KBP2016. We make use of three sets of training data: • Training and evaluation data in KBP2015: as described in 7.2 1244 LANG NAME NOMINAL OVERALL 2016 BEST P R F1 P R F1 P R F1 P R F1 ENG 0.898 0.789 0.840 0.554 0.336 0.418 0.836 0.680 0.750 0.846 0.710 0.772 CMN 0.848 0.702 0.768 0.414 0.258 0.318 0.789 0.625 0.698 0.789 0.737 0.762 SPA 0.835 0.778 0.806 0.000 0.000 0.000 0.835 0.602 0.700 0.839 0.656 0.736 ALL 0.893 0.759 0.821 0.541 0.315 0.398 0.819 0.639 0.718 0.802 0.704 0.756 Table 5: Official entity discovery performance of our methods on KBP2016 trilingual EDL track. Neither KBP2015 nor in-house data labels nominal mentions. Nominal mentions in Spanish are totally ignored since no training data is found for them. training data P R F1 KBP2015 0.836 0.598 0.697 KBP2015 + WIKI 0.837 0.628 0.718 KBP2015 + in-house 0.836 0.680 0.750 Table 6: Our entity discovery official performance (English only) in KBP2016 is shown as a comparison of three models trained by different combinations of training data sets. • Machine-labeled Wikipedia (WIKI): When terms or names are first mentioned in a Wikipedia article they are often linked to the corresponding Wikipedia page by hyperlinks, which clearly highlights the possible named entities with well-defined boundary in the text. We have developed a program to automatically map these hyperlinks into KBP annotations by exploring the infobox (if existing) of the destination page and/or examining the corresponding Freebase types. In this way, we have created a fairly large amount of weakly-supervised trilingual training data for the KBP2016 EDL task. Meanwhile, a gazeteer is created and used in KBP2016. • In-house dataset: A set of 10,000 English and Chinese documents is manually labeled using some annotation rules similar to the KBP 2016 guidelines. We split the available data into training, validation and evaluation sets in a ratio of 90:5:5. The models are trained for 256 epochs if the in-house data is not used, and 64 epochs otherwise. 7.3.2 Effect of various training data In our first set of experiments, we investigate the effect of using different training data sets on the final entity discovery performance. Different training runs are conducted on different combinations of the aforementioned data sources. 
In Table 6, we have summarized the official English entity discovery results from several systems we submitted to KBP2016 EDL evaluation round I and II. The first system, using only the KBP2015 data to train the model, has achieved 0.697 in F1 score in the official KBP2016 English evaluation data. After adding the weakly labeled data, WIKI, we can see the entity discovery performance is improved to 0.718 in F1 score. Moreover, we can see that it yields even better performance by using the KBP2015 data and the in-house data sets to train our models, giving 0.750 in F1 score. 7.3.3 The official trilingual EDL performance in KBP2016 The official best results of our system are summarized in Table 5. We have broken down the system performance according to different languages and categories of entities (named or nominal). Our system, achieving 0.718 in F1 score in the KBP2016 trilingual EDL track, ranks second among all participants. Note that our result is produced by a single system while the top system is a combination of two different models, each of which is based on 5-fold cross-validation (Liu et al., 2016). 8 Conclusion In this paper, we propose a novel solution to NER and MD by applying FFNN on top of FOFE features. This simple local-detection based approach has achieved almost state-of-the-art performance on various NER and MD tasks, without using any external knowledge or feature engineering. Acknowledgement This work is supported mainly by a research donation from iFLYTEK Co., Ltd., Hefei, China, and partially by a discovery grant from Natural Sciences and Engineering Research Council (NSERC) of Canada. 1245 References Andrew Borthwick, John Sterling, Eugene Agichtein, and Ralph Grishman. 1998. Exploiting diverse knowledge sources via maximum entropy in named entity recognition. In Proc. of the Sixth Workshop on Very Large Corpora. volume 182. http://ucrel.lancs.ac.uk/acl/W/W98/W98-1118.pdf. Jason P. C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTMCNNs. Transactions of the Association for Computational Linguistics 4:357–370. https://www.aclweb.org/anthology/Q16-1026. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. http://www.jmlr.org/papers/volume12/collobert11a /collobert11a.pdf. Cıcero dos Santos, Victor Guimaraes, RJ Niter´oi, and Rio de Janeiro. 2015. Boosting named entity recognition with neural character embeddings. In Proceedings of NEWS 2015 The Fifth Named Entities Workshop. Association for Computational Linguistics (ACL), page 25. https://doi.org/10.18653/v1/w15-3904. X. Glorot, A. Bordes, and Y. Bengio. 2011. Deep sparse rectifier neural networks. In International Conference on Artificial Intelligence and Statistics. JMLR W&CP:. volume 15, pages 315–323. http://www.jmlr.org/proceedings/papers/v15/glorot11a /glorot11a.pdf. David Graff and Ke Chen. 2005. Chinese gigaword. LDC Catalog No.: LDC2003T09, ISBN 1:58563– 58230. Alex Graves. 2012. Neural networks. In Supervised Sequence Labelling with Recurrent Neural Networks, Springer, pages 15–35. https://doi.org/10.1007/978-3-642-24797-2. Kurt Hornik. 1991. Approximation capabilities of multilayer feedforward networks. Neural Networks 4(2):251–257. https://doi.org/10.1016/08936080(91)90009-t. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. 
arXiv preprint arXiv:1508.01991 https://arxiv.org/abs/1508.01991. Heng Ji, Joel Nothman, and Ben Hachey. 2015. Overview of tac-kbp2015 tri-lingual entity discovery and linking. In Proceedings of Text Analysis Conference (TAC2015). http://nlp.cs.rpi.edu/paper/kbp2015.pdf. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In AAAI. Citeseer. https://arxiv.org/abs/1508.06615. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360 https://arxiv.org/abs/1603.01360. Dan Liu, Wei Lin, Shiliang Zhang, Si Wei, and Hui Jiang. 2016. Neural networks models for entity discovery and linking. arXiv preprint arXiv:1611.03558 https://arxiv.org/abs/1611.03558. Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1501–1511. http://www.aclweb.org/anthology/P15-1145. Angelo Mendonca, David Andrew Graff, and Denise DiPersio. 2009. Spanish gigaword second edition. Linguistic Data Consortium. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. https://papers.nips.cc/paper/5021-distributedrepresentations-of-words-and-phrases-and-theircompositionality.pdf. Truc-Vien T Nguyen, Alessandro Moschitti, and Giuseppe Riccardi. 2010. Kernel-based reranking for named-entity extraction. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters. Association for Computational Linguistics, pages 901–909. http://www.anthology.aclweb.org/C/C10/C102104.pdf. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword. Linguistic Data Consortium . Marc-Antoine Rondeau and Yi Su. 2016. LSTMbased NeuroCRFs for named entity recognition. In Interspeech 2016. International Speech Communication Association, pages 665–669. https://doi.org/10.21437/interspeech.2016-288. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003,. page 142147. http://www.aclweb.org/anthology/W030419. A. Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory 13(2):260–269. https://doi.org/10.1109/tit.1967.1054010. 1246 Shiliang Zhang, Hui Jiang, Shifu Xiong, Si Wei, and Li-Rong Dai. 2016. Compact feedforward sequential memory networks for large vocabulary continuous speech recognition. In Interspeech 2016. International Speech Communication Association. https://doi.org/10.21437/interspeech.2016-121. Shiliang Zhang, Hui Jiang, Mingbin Xu, Junfeng Hou, and Lirong Dai. 2015a. A fixedsize encoding method for variable-length sequences with its application to neural network language models. arXiv preprint arXiv:1505.01504. https://arxiv.org/abs/1505.01504. Shiliang Zhang, Hui Jiang, Mingbin Xu, Junfeng Hou, and Lirong Dai. 2015b. 
The fixed-size ordinally-forgetting encoding method for neural network language models. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p15-2081.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1248–1259 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1115 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1248–1259 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1115 Vancouver Welcomes You! Minimalist Location Metonymy Resolution Milan Gritta, Mohammad Taher Pilehvar, Nut Limsopatham and Nigel Collier Language Technology Lab Department of Theoretical and Applied Linguistics University of Cambridge {mg711,mp792,nl347,nhc30}@cam.ac.uk Abstract Named entities are frequently used in a metonymic manner. They serve as references to related entities such as people and organisations. Accurate identification and interpretation of metonymy can be directly beneficial to various NLP applications, such as Named Entity Recognition and Geographical Parsing. Until now, metonymy resolution (MR) methods mainly relied on parsers, taggers, dictionaries, external word lists and other handcrafted lexical resources. We show how a minimalist neural approach combined with a novel predicate window method can achieve competitive results on the SemEval 2007 task on Metonymy Resolution. Additionally, we contribute with a new Wikipedia-based MR dataset called RelocaR, which is tailored towards locations as well as improving previous deficiencies in annotation guidelines. 1 Introduction In everyday language, we come across many types of figurative speech. These irregular expressions are understood with little difficulty by humans but require special attention in NLP. One of these is metonymy, a type of common figurative language, which stands for the substitution of the concept, phrase or word being meant with a semantically related one. For example, in “Moscow traded gas and aluminium with Beijing.”, both location names were substituted in place of governments. Named Entity Recognition (NER) taggers have no provision for handling metonymy, meaning that this frequent linguistic phenomenon goes largely undetected within current NLP. Classification decisions presently focus on the entity using features such as orthography to infer its word sense, largely ignoring the context, which provides the strongest clue about whether a word is used metonymically. A common classification approach is choosing the N words to the immediate left and right of the entity or the whole paragraph as input to the model. However, this “greedy” approach also processes input that should in practice be ignored. Metonymy is problematic for applications such as Geographical Parsing (Monteiro et al., 2016; Gritta et al., 2017, GP) and other information extraction tasks in NLP. In order to accurately identify and ground location entities, for example, we must recognise that metonymic entities constitute false positives and should not be treated the same way as regular locations. For example, in “London voted for the change.”, London refers to the concept of “people” and should not be classified as a location. There are many types of metonymy (Shutova et al., 2013), however, in this paper, we primarily address metonymic location mentions with reference to GP and NER. 
Contributions: (1) We investigate how to improve classification tasks by introducing a novel minimalist method called Predicate Window (PreWin), which outperforms common feature selection baselines. Our final minimalist classifier is comparable to systems which use many external features and tools. (2) We improve the annotation guidelines in MR and contribute with a new Wikipedia-based MR dataset called ReLocaR to address the training data shortage. (3) We make an annotated subset of the CoNLL 2003 (NER) Shared Task available for extra MR training data, alongside models, tools and other data. 1248 2 Related Work Some of the earliest work on MR that used an approach similar to our method (machine learning and dependency parsing) was by Nissim and Markert (2003a). The decision list classifier with backoff was evaluated using syntactic head-modifier relations, grammatical roles and a thesaurus to overcome data sparseness and generalisation problems. However, the method was still limited for classifying unseen data. Our method uses the same paradigm but adds more features, a different machine learning architecture and a better usage of the parse tree structure. Much of the later work on MR comes from the SemEval 2007 Shared Task 8 (Markert and Nissim, 2007) and later by Markert and Nissim (2009). The feature set of Nissim and Markert (2003a) was updated to include: grammatical role of the potentially metonymic word (PMW) (such as subj, obj), lemmatised head/modifier of PMW, determiner of PMW, grammatical number of PMW (singular, plural), number of words in PMW and number of grammatical roles of PMW in current context. The winning system by Farkas et al. (2007) used these features and a maximum entropy classifier to achieve 85.2% accuracy. This was also the “leanest” system but still made use of feature engineering and some external tools. Brun et al. (2007) achieved 85.1% accuracy using local syntactical and global distributional features generated with an adapted, proprietary Xerox deep parser. This was the only unsupervised approach, based on using syntactic context similarities calculated on large corpora such as the the British National Corpus (BNC) with 100M tokens. Nastase and Strube (2009) used a Support Vector Machine (SVM) with handcrafted features (in addition to the features provided by Markert and Nissim (2007)) including grammatical collocations extracted from the BNC to learn selectional preferences, WordNet 3.0, Wikipedia’s category network, whether the entity “has-a-product” such as Suzuki and whether the entity “has-an-event” such as Vietnam (both obtained from Wikipedia). The bigger set of around 60 features and leveraging global (paragraph) context enabled them to achieve 86.1% accuracy. Once again, we draw attention to the extra training, external tools and additional feature generation. Similar recent work by Nastase and Strube (2013) which extends that of Nastase et al. (2012) involved transforming Wikipedia into a large-scale multilingual concept network called WikiNet. By building on Wikipedia’s existing network of categories and articles, their method automatically discovers new relations and their instances. As one of their extrinsic evaluations, metonymy resolution was tested. Global context (whole paragraph) was used to interpret the target word. Using an SVM and a powerful knowledge base built from Wikipedia, the highest performance to date (a 0.1% improvement from Nastase and Strube (2009)) was achieved at 86.2%, which has remained the SOTA until now. 
The related work on MR so far has made limited use of dependency trees. Typical features came in the form of a head dependency of the target entity, its dependency label and its role (subj-of-win, dobj-of-visit, etc). However, other classification tasks made good use of dependency trees. Liu et al. (2015) used the shortest dependency path and dependency sub-trees successfully to improve relation classification (new SOTA on SemEval 2010 Shared Task). Bunescu and Mooney (2005) show that using dependency trees to generate the input sequence to a model performs well in relation extraction tasks. Dong et al. (2014) used dependency parsing for Twitter sentiment classification to find the words syntactically connected to the target of interest. Joshi and Penstein-Ros´e (2009) used dependency parsing to explore how features based on syntactic dependency relations can be used to improve performance on opinion mining. In unsupervised lymphoma (type of cancer) classification, Luo et al. (2014) constructed a sentence graph from the results of a two-phase dependency parse to mine pathology reports for the relationships between medical concepts. Our methods also exploit the versatility of dependency parsing to leverage information about the sentence structure. 2.1 SemEval 2007 Dataset Our main standard for performance evaluation is the SemEval 2007 Shared Task 8 (Markert and Nissim, 2007) dataset first introduced in Nissim and Markert (2003b). Two types of entities were evaluated, organisations and locations, randomly retrieved from the British National Corpus (BNC). 1249 We only use the locations dataset, which comprises a train (925 samples) and a test (908 samples) partition. For medium evaluation, the classes are literal (geographical territories and political entities), metonymic (place-for-people, place-forproduct, place-for-event, capital-for-government or place-for-organisation) and mixed (metonymic and literal frames invoked simultaneously or unable to distinguish). The metonymic class further breaks down into two levels of subclasses allowing for fine evaluation. The class distribution within SemEval is approx 80% literal, 18% metonymic and 2% mixed. This seems to be the approximate natural distribution of the classes for location metonymy, which we have also observed while sampling Wikipedia for our new dataset. 3 Our Approach Our contribution broadly divides into two main parts, data and methodology. Section 3 introduces our new dataset, Section 4 introduces our new feature extraction method. 3.1 Design and Motivation As part of our contribution, we created a new MR dataset called ReLocaR (Real Location Retrieval), partly due to the lack of quality annotated train/test data and partly because of the shortcomings with the SemEval 2007 dataset (see Section 3.2). Our corpus is designed to evaluate the capability of a classifier to distinguish literal, metonymic and mixed location mentions. In terms of dataset size, ReLocaR contains 1,026 training and 1,000 test instances. The data was sampled using Wikipedia’s Random Article API1. We kept the sentences, which contained at least one of the places from a manually compiled list2 of countries and capitals of the world. The natural distribution of literal versus metonymic examples is approximately 80/20 so we had to discard the excess literal examples during sampling to balance the classes. 3.2 ReLocaR - Improvements over SemEval 1. 
We do not break down the metonymic class further as the distinction between the subclasses is subtle and hard to agree on. 2. The distribution of the three classes in ReLocaR (literal, metonymic, mixed) is approximately 1https://www.mediawiki.org/wiki/API:Random 2https://github.com/milangritta/Minimalist-LocationMetonymy-Resolution/data/locations.txt (49%, 49%, 2%) eliminating the high bias (80%, 18%, 2%) of SemEval. We will show how such a high bias transpires in the test results (Section 5). 3. We have reviewed the annotation of the test partition and found that we disagreed with up to 11% of the annotations. Zhang and Gelernter (2015) disagreed with the annotation 8% of the time. Poibeau (2007) also challenged some annotation decisions. ReLocaR was annotated by 4 trained linguists (undergraduate and graduate) and 2 computational linguists (authors). Linguists were independently instructed (see section 3.3) to assign one of the two classes to each example with little guidance. We leveraged their linguistic training and expertise to make decisions rather than imposing some specific scheme. Unresolved sentences would receive the mixed class label. 4. The most prominent difference is a small change in the annotation scheme (after independent linguistic advice). The SemEval 2007 Task 8 annotation scheme (Markert and Nissim, 2007) considers the political entity interpretation a literal reading. It suggests that in “Britain’s current account deficit...”, Britain refers to a literal location, rather than a government (which is an organisation). This is despite acknowledging that “The locative and the political sense is often distinguished in dictionaries as well as in the ACE annotation scheme...”. In ReLocaR datasets, we consider a political entity a metonymic reading. 3.2.1 Why government is not a location A government/nation/political entity is semantically much closer to Organisation/Person than a Location. “Moscow talks to Beijing.” does not tell us where this is happening. It most likely means a politician is talking to another politician. These are not places but people and/or groups. It is paramount to separate references to “inanimate” places from references to “animate” entities. 3.3 Annotation Guidelines (Summary) ReLocaR has three classes, literal, metonymic and mixed. Literal reading comprises territorial interpretations (the geographical territory, the land, soil and physical location) i.e. inanimate places that serve to point to a set of coordinates (where something might be located and/or happening) such as “The treaty was signed in Italy.”, “Peter comes from Russia.”, “Britain’s 1250 Andy Murray won the Grand Slam today.”, “US companies increased exports by 50%.”, “China’s artists are among the best in the world.” or “The reach of the transmission is as far as Brazil.”. A metonymic reading is any location occurrence that expresses animacy (Coulson and Oakley, 2003) such as “Jamaica’s indifference will not improve the negotiations.”, “Sweden’s budget deficit may rise next year.”. The following are other metonymic scenarios: a location name, which stands for any persons or organisations associated with it such as “We will give aid to Afghanistan.”, a location as a product such as “I really enjoyed that delicious Bordeaux.”, a location posing as a sports team “India beat Pakistan in the playoffs.”, a governmental or other legal entity posing as a location “Zambia passed a new justice law today.”, events acting as locations “Vietnam was a bad experience for me”. 
The mixed reading is assigned in two cases: either both readings are invoked at the same time such as in “The Central European country of Slovakia recently joined the EU.” or there is not enough context to ascertain the reading i.e. both are plausible such as in “We marvelled at the art of ancient Mexico.”. In difficult cases such as these, the mixed class is assigned. 3.4 Inter-Annotator Agreement We give the IAA for the test partition only. The whole dataset was annotated by the first author as the main annotator. Two pairs of annotators (4 linguists) then labelled 25% of the dataset each for a 3-way agreement. The agreement before adjudication was 91% and 93%, 97.2% and 99.2% after adjudication (for pair one and two respectively). The other 50% of sentences were then once again labelled by the main annotator with a 97% agreement with self. The remainder of the sentences (unable to agree on among annotators even after adjudication) were labelled as a mixed class (1.8% of all sentences). 3.5 CoNLL 2003 and MR We have also annotated a small subset of the CoNLL 2003 NER Shared Task data for metonymy resolution (locations only). Respecting the Reuters RCV1 Corpus (Lewis et al., 2004) distribution permissions3, we make only a heavily processed subset available on GitHub4. There are 4,089 positive (literal) and 2,126 negative (metonymic) sentences to assist with algorithm experimentation and model prototyping. Due to the lack of annotated training data for MR, this is a valuable resource. The data was annotated by the first author, there are no IAA figures. 4 Methodology 4.1 Predicate Window (PreWin) Through extensive experimentation and observation, we arrived at the intuition behind PreWin, our novel feature extraction method. The classification decision of the class of the target entity is mostly informed not by the whole sentence (or paragraph), rather it is a small and focused “predicate window” pointed to by the entity’s head dependency. In other words, most of the sentence is not only superfluous for the task, it actually lowers the accuracy of the model due to irrelevant input. This is particularly important in metonymy resolution as the entity’s surface form is not taken into consideration, only its context. In Figure 1, we show the process of extracting the Predicate Window from a sample sentence (more examples are available in the Appendix). We start by using the SpaCy dependency parser by Honnibal and Johnson (2015), which is the fastest in the world, open source and highly customisable. Each dependency tree provides the following features: dependency labels and entity head dependency. Rather than using most of the tree, we only use a single local head dependency relationship to point to the predicate. Leveraging a dependency parser helps PreWin with selecting the minimum relevant input to the model while discarding irrelevant input, which may cause the neural model to behave unpredictably. Finally, the entity itself is never used as input in any of our methods, we only rely on context. PreWin then extracts up to 5 words and their dependency labels starting at the head of the entity (see the next paragraph for exceptions), going in the away (from the entity) direction. The method always skips the conjunct (“and”, “or”) 3http://trec.nist.gov/data/reuters/reuters.html 4https://github.com/milangritta/Minimalist-LocationMetonymy-Resolution 1251 Figure 1: The predicate window starts at the head of the target entity and ends up to 4 words further, going away from the entity. 
The “conj” relations are always skipped. In the above example, the head of “UK” is “decided” so PreWin takes 5 words plus dependency labels as the input to the model. The left-hand side input to the model is empty and is set to zeroes (see Figure 2 for a full model diagram). relationships in order to find the predicate (see Figure 3 in the Appendix for a visual example of why this is important). The reason for the choice of 5 words is the balance between too much input, feeding the model with less relevant context and just enough context to capture the necessary semantics. We have experimented with lengths of 3-10 words, however 5 words typically achieved the best results. The following are the three types of exceptions when the output will not start with the head of the entity. In these cases, PreWin will include the neighbouring word as well. In a sentence “The pub is located in southern Zambia.”, the head of the entity is “in”, however in this case PreWin will include “southern” (adjectival modifier) as this carries important semantics for the classification. Similarly, PreWin will also include the neighbouring compound noun as in: “Lead coffins were very rare in colonial America.”, the output will include “colonial” as a feature plus the next four words. In another sentence: “Vancouver’s security is the best in the world.’, PreWin will include the “’s” (case) plus the next four words continuing from the head of the entity (the word “security”). 4.2 Neural Network Architecture The output of PreWin is used to train the following machine learning model. We decided to use the Long Short Term Memory (LSTM) architecture by Keras5 (Chollet, 2015). Two LSTMs are used, one for the left and right side (up to 5 words each). Two fully connected (dense) layers are used for the left and right dependency relation labels (up to 5https://keras.io/ 5 labels each, encoded as one-hot). The full architecture is available in the Appendix, please see Figure 2. You can download the models and data from GitHub6. LSTMs are excellent at processing language sequences (Hochreiter and Schmidhuber, 1997; Sak et al., 2014; Graves et al., 2013), which is why we use this architecture. It allows the model to encode the word sequences, preserve important word order and provide superior classification performance. Both the Multilayer Perceptron and the Convolutional Neural Network were consistently inferior (typically 5% - 10% lower accuracy) in our earlier performance comparisons. For all experiments, we used a vocabulary of the first (most frequent) 100,000 word vectors in GloVe7 (Pennington et al., 2014). Finally, unless explicitly stated otherwise, the standard dimension of word embeddings was 50, which we found to work best. 4.3 “Immediate” Baseline A common approach in lexical classification tasks is choosing the 5 to 10 words to the immediate right and left of the entity as input to a model (Mikolov et al., 2013; Mesnil et al., 2013; Baroni et al., 2014; Collobert et al., 2011). We evaluate this method (its 5 and 10-word variant) alongside PreWin and Paragraph. 4.4 Paragraph Baseline The paragraph baseline method extends the “immediate” one by taking 50 words from each side of the entity as the input to the classifier. In practice, this extends the feature window to include extrasentential evidence in the paragraph. 
6 https://github.com/milangritta/Minimalist-LocationMetonymy-Resolution
7 http://nlp.stanford.edu/projects/glove/
This paragraph-level approach is also popular in machine learning (Melamud et al., 2016; Zhang et al., 2016).
4.5 Ensemble of Models
In addition to a single best performing model, we have combined several models trained on different data and/or using different model configurations. For the SemEval test, we combined three separate models trained on the newly annotated CoNLL dataset and the training data for SemEval. For the ReLocaR test, we once again let three models vote, trained on CoNLL and ReLocaR data.
5 Results
We evaluate all methods using three datasets for training (ReLocaR, SemEval, CoNLL) and two for testing (ReLocaR, SemEval). Due to inherent randomness in the deep learning libraries, we performed 10 runs for each setup and averaged the figures (we also report the standard deviation).
5.1 Metrics and Significance
Following the SemEval 2007 convention, we use two metrics to evaluate performance: accuracy and f-scores (for each class). We only evaluate at the coarse level, which means literal versus non-literal (metonymic and mixed are merged into one class). In terms of statistical significance, our best score on the SemEval dataset (908 samples) is not significant at the 95% confidence level. However, the accuracy improvements of PreWin over the common baselines are highly statistically significant with 99.9%+ confidence.
5.2 Predicate Window
Tables 1 and 2 show PreWin performing consistently better than the other baselines, in many instances significantly better and with fewer words (smaller input). The standard deviation is also lower for PreWin, meaning more stable test runs. Compared with the 5 and 10 window “immediate” baseline, which is the common approach in classification, PreWin is more discriminating with its input. Due to the linguistic variety and the myriad of ways the target word sense can be triggered in a sentence, it is not always the case that the 5 or 10 nearest words inform us of the target entity’s meaning/type. We ought to ask what else is being expressed in the same 5 to 10-word window. Conventional classification methods (Immediate, Paragraph) can also be seen as prioritising either feature precision or feature recall. Paragraph maximises the input sequence size, which maximises recall at the expense of including features that are either irrelevant or mislead the model, lowering precision. The Immediate baseline maximises precision by using features close to the target entity at the expense of missing important features positioned outside of its small window, lowering recall. PreWin can be understood as an integration of both approaches. It retains high precision by limiting the size of the feature window to 5 while maximising recall by searching anywhere in the sentence, frequently outside of a limited “immediate” window. Perhaps we can also caution against a simple adherence to Firth’s (1957) dictum that “you shall know a word by the company it keeps”: this does not appear to hold in our experiments, as PreWin regularly performs better than the “immediate” baseline. Further prototypical examples of the method can be viewed in the Appendix. Our intuition that most words in the sentence, indeed in the paragraph, do not carry the semantic information required to classify the target entity is ultimately supported by this evidence. The model uses only a small window, linked to the entity via a head dependency relationship, for the final classification decision.
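As a concrete illustration of the extraction step, the following minimal sketch implements the core of PreWin with spaCy: it follows the target token's head past any conjuncts, then collects up to five tokens and their dependency labels moving away from the entity. The adjectival-modifier, compound, and case-marker exceptions of Section 4.1 are omitted, and the example sentence is purely illustrative, so this should be read as an approximation rather than the implementation used for the reported results.

```python
# Approximate sketch of the PreWin extraction step (Section 4.1) with spaCy.
# The adjectival-modifier / compound / case exceptions are omitted, so this
# illustrates the core idea, not the exact procedure described above.
import spacy

nlp = spacy.load("en_core_web_sm")

def predicate_window(doc, target, size=5):
    """Return up to `size` (token, dependency label) pairs, starting at the
    target token's head and moving away from the target, skipping 'conj'."""
    head = target.head
    while head.dep_ == "conj" and head.head is not head:
        head = head.head                      # follow conjuncts up to the predicate
    step = 1 if head.i >= target.i else -1    # direction away from the entity
    window, i = [], head.i
    while 0 <= i < len(doc) and len(window) < size:
        if doc[i].dep_ != "conj":             # "conj" relations are always skipped
            window.append((doc[i].text, doc[i].dep_))
        i += step
    return window

doc = nlp("After the referendum, the UK decided to begin negotiations.")
uk = next(tok for tok in doc if tok.text == "UK")
print(predicate_window(doc, uk))
# e.g. [('decided', 'ROOT'), ('to', 'aux'), ('begin', 'xcomp'), ...]
```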
5.3 Common Errors Most of the time (typically 85% for the two datasets), PreWin is sufficient for an accurate classification. However, it does not work well in some cases. The typical 15% error rate breaks down as follows (percentages were estimated based on extensive experimentation and observation): Discarding important context (3%): Sometimes the 5 or 10 word “immediate” baseline method would actually have been preferred such as in the sentence “...REF in 2014 ranked Essex in the top 20 universities...”. PreWin discards the right-hand side input, which is required in this case for a correct classification. Since ”ranked” is the head of ”Essex”, the rest of the sentence gets ignored and the valuable context gets lost. More complex semantic patterns (11%): Many common mistakes were due to the lack 1253 of the model’s understanding of more complex predicates such as in the following sentences: “ ...of military presence of Germany.”, “Houston also served as a member and treasurer of the...” or ”...invitations were extended to Yugoslavia ...”. We think this is due to a lack of training data (around 1,000 sentences per dataset). Additional examples such as “...days after the tour had exited Belgium.” expose some of the limitations of the neural model to recognise uncommon ways of expressing a reference to a literal place. Recall that no external resources or tools were used to supplement the training/features, the model had to learn to generalise from what it has seen during training, which was limited in our experiments. Parsing mistakes (1%): were less common though still present. It is important to choose the right dependency parser for the task since different parsers will often generate slightly different parse trees. We have used SpaCy8 for all our experiments, which is a Python-based industrial strength NLP library. Sometimes, tokenisation errors for acronyms like “U.S.A.” and wrongly hyphenated words may also cause parsing errors, however, this was infrequent. Method Training (Size) Acc (STD) PreWin SemEval (925) 62.4 (2.30) Immediate 5 SemEval (925) 60.6 (2.34) Immediate 10 SemEval (925) 59.2 (2.26) Paragraph SemEval (925) 58.0 (2.49) PreWin CoNLL (6,215) 82.8 (0.46) Immediate 5 CoNLL (6,215) 78.2 (0.61) Immediate 10 CoNLL (6,215) 79.1 (0.76) Paragraph CoNLL (6,215) 79.5 (1.50) PreWin ReLocaR (1,026) 83.6 (0.71) Immediate 5 ReLocaR (1,026) 81.4 (1.34) Immediate 10 ReLocaR (1,026) 81.3 (1.44) Paragraph ReLocaR (1,026) 80.0 (2.25) Ensemble ReLocaR/CoNLL 84.8 (0.34) Table 1: Results for ReLocaR data. Figures are averaged over 10 runs. STD - Standard deviation. 5.4 Flexibility of Neural Model The top accuracy figures for ReLocaR are almost identical to SemEval. The highest single model 8https://spacy.io/ accuracy for ReLocaR was 83.6% (84.8% with Ensemble), which was within 0.5% of the equivalent methods for SemEval (83.1%, 84.6% for Ensemble). Both were achieved using the same methods (PreWin or Ensemble), neural architecture and size of corpora. When the models were trained on the CoNLL data, the accuracies were 82.8% and 79.5%. However, when the models trained on ReLocaR and tested on SemEval (and vice versa), accuracy dropped to between 62.4% and 69% showing that what was learnt does not seem to transfer well to another dataset. We think the reason for this is the difference in annotation guidelines; the government is a metonymic reading, not a literal one. This causes the model to make more mistakes. 
Method Training (Size) Acc (STD) PreWin SemEval (925) 83.1 (0.64) Immediate 5 SemEval (925) 81.3 (1.11) Immediate 10 SemEval (925) 81.9 (0.89) Paragraph SemEval (925) 81.3 (0.88) PreWin CoNLL (6,215) 79.5 (0.34) Immediate 5 CoNLL (6,215) 77.8 (1.47) Immediate 10 CoNLL (6,215) 77.8 (1.22) Paragraph CoNLL (6,215) 77.2 (2.10) PreWin ReLocaR (1,026) 69.0 (3.13) Immediate 5 ReLocaR (1,026) 63.6 (5.42) Immediate 10 ReLocaR (1,026) 64.2 (4.12) Paragraph ReLocaR (1,026) 64.4 (7.76) Nastase et al. SemEval (925) 86.2 (N/A) Ensemble SemEval/CoNLL 84.6 (0.43) Table 2: Results for SemEval data. Figures are averaged over 10 runs. STD - standard deviation. 5.5 Ensemble Method The highest accuracy and f-scores were achieved with the ensemble method for both datasets. We combined three models (previously described in section 4.5) for SemEval to achieve 84.6% accuracy and three models for ReLocaR to achieve 84.8% for the new dataset. Training separate models with different parameters and/or on different datasets does increase classification capability as various models learn distinct aspects of the task, enabling the 1.2 - 1.5% improvement. 5.6 Dimensionality of Word Embeddings We found that increasing dimension size (up to 300) did not materially improve performance. 1254 The neural network tended to overfit, even with fewer epochs, the results were comparable to our default 50-dimensional embeddings. We posit that fewer dimensions of the distributed word representations force the abstraction level higher as the meaning of words must be expressed more succinctly. We think this helps the model generalise better, particularly for smaller datasets. Lastly, learning word embeddings from scratch on datasets this small (around 1,000 samples) is possible but impractical, the performance typically decreases by around 5% if word embeddings are not initialised first. Dataset / Method Literal Non-Literal SemEval / PreWin 90.6 57.3 SemEval / SOTA 91.6 59.1 ReLocaR / PreWin 84.4 84.8 Table 3: Per class f-scores - all figures obtained using the Ensemble method, averaged over 10 runs. Note the model class bias for SemEval. 5.7 F-Scores and Class Imbalance Table 3 shows the SOTA f-scores, our best results for SemEval 2007 and the best f-scores for ReLocaR. The class imbalance inside SemEval (80% literal, 18% metonymic, 2% mixed) is reflected as a high bias in the final model. This is not the case with ReLocaR and its 49% literal, 49% metonymic and 2% mixed ratio of 3 classes. The model was equally capable of distinguishing between literal and non-literal cases. 5.8 Another baseline There was another baseline we tested, however, it was not covered anywhere so far because of its low performance. It was a type of extreme parse tree pruning, during which most of the sentence gets discarded and we only retain 3 to 4 content words. The method uses non-local (long range) dependencies to construct a short input sequence. However, the method was a case of ignoring too many relevant words and accuracy was fluctuating in the mid-60% range, which is why we did not report the results. However, it serves to further justify the choice of 5 words as the predicate window as fewer words caused the model to underperform. 6 Discussion 6.1 NER, GP and Metonymy We think the next frontier is a NER tagger, which actively handles metonymy. The task of labelling entities should be mainly driven by context rather than the word’s surface form. If the target entity looks like “London”, this should not mean the entity is automatically a location. 
Metonymy is a frequent linguistic phenomenon (around 20% of location mentions are metonymic, see section 3.1) and could be handled by NER taggers to enable many innovative downstream NLP applications. Geographical Parsing is a pertinent use case. In order to monitor/mine text documents for geographical information only, the current NER technology does not have a solution. We think it is incorrect for any NER tagger to label “Vancouver” as a location in “Vancouver welcomes you!”. A better output might be something like the following: Vancouver = location AND metonymy = True. This means Vancouver is usually a location but is used metonymically in this case. How this information is used will be up to the developer. Organisations behaving as persons, share prices or products are but a few other examples of metonymy. 6.2 Simplicity and Minimalism Previous work in MR such as most of the SemEval 2007 participants (Farkas et al., 2007; Nicolae et al., 2007; Leveling, 2007; Brun et al., 2007; Poibeau, 2007) and the more recent contributions used a selection of many of the following features/tools for classification: handmade trigger word lists, WordNet, VerbNet, FrameNet, extra features generated/learnt from parsing Wikipedia (approx 3B words) and BNC (approx 100M words), custom databases, handcrafted features, multiple (sometimes proprietary) parsers, Levin’s verb classes, 3,000 extra training instances from a corpus called MAScARA9 by Markert and Nissim (2002) and other extra resources including the SemEval Task 8 features. We managed to achieve comparable performance with a small neural network typically trained in no more than 5 epochs, minimal training data, a basic dependency parser and the new PreWin method by being highly discriminating in choosing signal over noise. 9http://homepages.inf.ed.ac.uk/mnissim/mascara/ 1255 7 Conclusions and Future Work We showed how a minimalist neural approach can replace substantial external resources, handcrafted features and how the PreWin method can even ignore most of the paragraph where the entity is positioned and still achieve competitive performance in metonymy resolution. The pressing new question is: “How much better the performance could have been if our method availed itself of the extra training data and resources used by previous works?” Indeed this may be the next research chapter for PreWin. We discussed how tasks such as Geographical Parsing can benefit from “metonymy-enhanced” NER tagging. We have also presented a case for better annotation guidelines for MR (after consulting with a number of linguists), which now means that a government is not a literal class, rather it is a metonymic one. We fully agreed with the rest of the previous annotation guidelines. We also introduced ReLocaR, a new corpus for (location) metonymy resolution and encourage researchers to make effective use of it (including the additional CoNLL 2003 subset we annotated for metonymy). Future work may involve testing PreWin on an NER task to see if and how it can generalise to a different classification task and how the results compare to the SOTA and similar methods such as that of Collobert et al. (2011) using the CoNLL 2003 NER datasets. Word Sense Disambiguation (Yarowsky, 2010; Pilehvar and Navigli, 2014) with neural networks (Melamud et al., 2016) is another related classification task suitable for testing PreWin. If it does perform better, this will be of considerable interest to classification research (and beyond) in NLP. 
Acknowledgments We gratefully acknowledge the funding support of the Natural Environment Research Council (NERC) PhD Studentship NE/M009009/1 (Milan Gritta, DREAM CDT), EPSRC (Nigel Collier and Nut Limsopatham - Grant No. EP/M005089/1) and MRC (Mohammad Taher Pilehvar) Grant No. MR/M025160/1 for PheneBank. References Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL (1). pages 238–247. Caroline Brun, Maud Ehrmann, and Guillaume Jacquet. 2007. XRCE-M: A hybrid system for named entity metonymy resolution. In Proceedings of the 4th International Workshop on Semantic Evaluations. pages 488–491. Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the conference on human language technology and empirical methods in natural language processing. pages 724–731. Franc¸ois Chollet. 2015. Keras. https://github. com/fchollet/keras. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Seana Coulson and Todd Oakley. 2003. Metonymy and conceptual blending. Pragmatics and beyond - new series pages 51–80. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In ACL (2). pages 49–54. Rich´ard Farkas, Eszter Simon, Gy¨orgy Szarvas, and D´aniel Varga. 2007. Gyder: maxent metonymy resolution. In Proceedings of the 4th International Workshop on Semantic Evaluations. pages 161–164. J. R. Firth. 1957 1952-59:1–32. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on. IEEE, pages 6645–6649. Milan Gritta, Mohammad Taher Pilehvar, Nut Limsopatham, and Nigel Collier. 2017. What’s missing in geographical parsing? Language Resources and Evaluation pages 1–21. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, pages 1373–1378. https://aclweb.org/anthology/D/D15/D15-1162. 1256 Mahesh Joshi and Carolyn Penstein-Ros´e. 2009. Generalizing dependency features for opinion mining. In Proceedings of the ACL-IJCNLP 2009 conference short papers. pages 313–316. Johannes Leveling. 2007. Fuh (fernuniversit¨at in hagen): Metonymy recognition using different kinds of context for a memory-based learner. In Proceedings of the 4th International Workshop on Semantic Evaluations. pages 153–156. David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research 5(Apr):361–397. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. arXiv preprint arXiv:1507.04646 . Yuan Luo, Aliyah R Sohani, Ephraim P Hochberg, and Peter Szolovits. 2014. Automatic lymphoma classification with sentence subgraph mining from pathology reports. Journal of the American Medical Informatics Association 21(5):824–832. 
Katja Markert and Malvina Nissim. 2002. Metonymy resolution as a classification task. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10. pages 204– 213. Katja Markert and Malvina Nissim. 2007. Semeval2007 task 08: Metonymy resolution at semeval2007. In Proceedings of the 4th International Workshop on Semantic Evaluations. pages 36–41. Katja Markert and Malvina Nissim. 2009. Data and models for metonymy resolution. Language Resources and Evaluation 43(2):123–138. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of CONLL. Gr´egoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neuralnetwork architectures and learning methods for spoken language understanding. In Interspeech. pages 3771–3775. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, Curran Associates, Inc., pages 3111–3119. http://papers.nips.cc/paper/5021distributed-representations-of-words-and-phrasesand-their-compositionality.pdf. Bruno R Monteiro, Clodoveu A Davis, and Fred Fonseca. 2016. A survey on the geographic scope of textual documents. Computers & Geosciences 96:23– 34. Vivi Nastase, Alex Judea, Katja Markert, and Michael Strube. 2012. Local and global context for supervised and unsupervised metonymy resolution. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 183–193. Vivi Nastase and Michael Strube. 2009. Combining collocations, lexical and encyclopedic knowledge for metonymy resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2. pages 910–918. Vivi Nastase and Michael Strube. 2013. Transforming wikipedia into a large scale multilingual concept network. Artificial Intelligence 194:62–85. Cristina Nicolae, Gabriel Nicolae, and Sanda Harabagiu. 2007. Utd-hlt-cg: Semantic architecture for metonymy resolution and classification of nominal relations. In Proceedings of the 4th International Workshop on Semantic Evaluations. pages 454–459. Malvina Nissim and Katja Markert. 2003a. Syntactic features and word similarity for supervised metonymy resolution. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. pages 56–63. Malvina Nissim and Katja Markert. 2003b. Syntactic features and word similarity for supervised metonymy resolution. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. pages 56–63. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. http://www.aclweb.org/anthology/D14-1162. Mohammad Taher Pilehvar and Roberto Navigli. 2014. A large-scale pseudoword-based evaluation framework for state-of-the-art word sense disambiguation. Computational Linguistics . Thierry Poibeau. 2007. Up13: Knowledge-poor methods (sometimes) perform poorly. In Proceedings of the 4th International Workshop on Semantic Evaluations. pages 418–421. Has¸im Sak, Andrew Senior, and Franc¸oise Beaufays. 2014. 
Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128 . 1257 Ekaterina Shutova, Jakub Kaplan, Simone Teufel, and Anna Korhonen. 2013. A computational model of logical metonymy. ACM Transactions on Speech and Language Processing (TSLP) 10(3):11. David Yarowsky. 2010. Word sense disambiguation. In Handbook of Natural Language Processing, Second Edition, Chapman and Hall/CRC, pages 315– 338. Jinchao Zhang, Fandong Meng, Mingxuan Wang, Daqi Zheng, Wenbin Jiang, and Qun Liu. 2016. Is local window essential for neural network based chinese word segmentation? In China National Conference on Chinese Computational Linguistics. Springer, pages 450–457. Wei Zhang and Judith Gelernter. 2015. Exploring metaphorical senses and word representations for identifying metonyms. arXiv preprint arXiv:1508.04515 . 1258 Figure 2: The neural architecture of the final model. The sentence is Vancouver is the host city of the ACL 2017. Small, separate sequential models are merged and trained as one. The 50-dimensional embeddings were initiated using GloVe. The right hand input is processed from right to left, the left hand input is processed from left to right. This is to emphasise the importance of the words closer to the entity. Figure 3: Why it is important for PreWin to always skip the conjunct dependency relation. Figure 4: A lot of irrelevant input is skipped such as “is” and “Peter Pan in an interview.”. Figure 5: By looking for the predicate window, the model skips many irrelevant words. 1259
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1260–1272, Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1116
Unifying Text, Metadata, and User Network Representations with a Neural Network for Geolocation Prediction
Yasuhide Miura†,‡ [email protected] Motoki Taniguchi† [email protected] Tomoki Taniguchi† [email protected] Tomoko Ohkuma† [email protected]
†Fuji Xerox Co., Ltd. ‡Tokyo Institute of Technology
Abstract
We propose a novel geolocation prediction model using a complex neural network. Our model unifies text, metadata, and user network representations with an attention mechanism to overcome previous ensemble approaches. In an evaluation using two open datasets, the proposed model exhibited a maximum 3.8% increase in accuracy and a maximum 6.6% increase in accuracy@161 against previous models. We further analyzed several intermediate layers of our model, which revealed that their states capture some statistical characteristics of the datasets.
1 Introduction
Social media sites have become a popular source of information to analyze current opinions of numerous people. Many researchers have worked to realize various automated analytical methods for social media because manual analysis of such vast amounts of data is difficult. Geolocation prediction is one such analytical method that has been studied widely to predict a user location or a document location. Location information is crucially important for analyses such as disaster analysis (Sakaki et al., 2010), disease analysis (Culotta, 2010), and political analysis (Tumasjan et al., 2010). Such information is also useful for analyses such as sentiment analysis (Martínez-Cámara et al., 2014) and user attribute analysis (Rao et al., 2010) to undertake detailed region-specific analyses. Geolocation prediction has been performed for Wikipedia (Overell, 2009), Flickr (Serdyukov et al., 2009; Crandall et al., 2009), Facebook (Backstrom et al., 2010), and Twitter (Cheng et al., 2010; Eisenstein et al., 2010). Among these sources, Twitter is often preferred because of its characteristics, which are suited for geolocation prediction. First, some tweets include geotags, which are useful as ground truth locations. Secondly, tweets include metadata such as timezones and self-declared locations that can facilitate geolocation prediction. Thirdly, a user network is obtainable by consideration of the interaction between two users as a network link. Herein, we propose a neural network model to tackle geolocation prediction in Twitter. Past studies have combined text, metadata, and user network information with ensemble approaches (Han et al., 2013, 2014; Rahimi et al., 2015a; Jayasinghe et al., 2016) to achieve state-of-the-art performance. Our model combines text, metadata, and user network information using a complex neural network. Neural networks have recently shown effectiveness at capturing complex representations combining simpler representations from large-scale datasets (Goodfellow et al., 2016).
We intend to obtain unified text, metadata, and user network representations with an attention mechanism (Bahdanau et al., 2014) that is superior to the earlier ensemble approaches. The contributions of this paper are the following: 1. We propose a neural network model that learns unified text, metadata, and user network representations with an attention mechanism. 2. We show that the proposed model outperforms the previous ensemble approaches in two open datasets. 3. We analyze some components of the proposed model to gain insight into the unification processes of the model. Our model specifically emphasizes geolocation prediction in Twitter to use benefits derived from the characteristics described above. However, our 1260 model can be readily extended to other social media analyses such as user attribute analysis and political analysis, which can benefit from metadata and user network information. In subsequent sections of this paper, we explain the related works in four perspectives in Section 2. The proposed neural network model is described in Section 3 along with two open datasets that we used for evaluations in Section 4. Details of an evaluation are reported in Section 5 with discussions in Section 6. Finally, Section 7 concludes the paper with some future directions. 2 Related Works 2.1 Text-based Approach Probability distributions of words over locations have been used to estimate the geolocations of users. Maximum likelihood estimation approaches (Cheng et al., 2010, 2013) and language modeling approaches minimizing KL-divergence (Wing and Baldridge, 2011; Kinsella et al., 2011; Roller et al., 2012) have succeeded in predicting user locations using word distributions. Topic modeling approaches to extract latent topics with geographical regions (Eisenstein et al., 2010, 2011; Hong et al., 2012; Ahmed et al., 2013) have also been explored considering word distributions. Supervised machine learning methods with word features are also popular in text-based geolocation prediction. Multinomial Naive Bayes (Han et al., 2012, 2014; Wing and Baldridge, 2011), logistic regression (Wing and Baldridge, 2014; Han et al., 2014), hierarchical logistic regression (Wing and Baldridge, 2014), and a multilayer neural network with stacked denoising autoencoder (Liu and Inkpen, 2015) have realized geolocation prediction from text. A semi-supervised machine learning approach by Cha et al. (2015) has also been produced using a sparse-coding and dictionary learning. 2.2 User-network-based Approach Social media often include interactions of several kinds among users. These interactions can be regarded as links that form a network among users. Several studies have used such user network information to predict geolocation. Backstrom et al. (2010) introduced a probabilistic model to predict the location of a user using friendship information in Facebook. Friend and follower information in Twitter were used to predict user locations with a most frequent friend algorithm (Davis Jr. et al., 2011), a unified descriptive model (Li et al., 2012b), location-based generative models (Li et al., 2012a), dynamic Bayesian networks (Sadilek et al., 2012), a support vector machine (Rout et al., 2013), and maximum likelihood estimation (McGee et al., 2013). Mention information in Twitter is also used with label propagation models (Jurgens, 2013; Compton et al., 2014) and an energy and social local coefficient model (Kong et al., 2014). Jurgens et al. 
(2015) compared nine user-network-based approaches targeting Twitter, controlling data conditions. 2.3 Metadata-based Approach Metadata such as location fields are useful as effective clues to predict geolocation. Hecht et al. (2011) reported that decent accuracy of geolocation prediction can be achieved using location fields. Approaches to combine metadata with texts are also proposed to extend text-based approaches. Combinatory approaches such as a dynamically weighted ensemble method (Mahmud et al., 2012), polygon stacking (Schulz et al., 2013), stacking (Han et al., 2013, 2014), and average pooling with a neural network (Miura et al., 2016) have strengthened geolocation prediction. 2.4 Combinatory Approach Extending User-network-based Approach Several attempts have been made to combine usernetwork-based approaches with other approaches. A text-based approach with logistic regression was combined with label propagation approaches to enhance geolocation prediction (Rahimi et al., 2015a,b, 2016). Jayasinghe et al. (2016) combined nine components including text-based approaches, metadata-based approaches, and a usernetwork-based approach with a cascade ensemble method. 2.5 Comparisons with Proposed Model A model we propose in Section 3 which combines text, metadata, and user network information with a neural network, can be regarded as an alternative to approaches using text and metadata (Mahmud et al., 2012; Schulz et al., 2013; Han et al., 2013, 2014; Miura et al., 2016), approaches with text and user network information (Rahimi et al., 2015a,b), and an approach with text, metadata, and user network information (Jayasinghe et al., 2016). In Section 5, we demonstrate that our model outperforms earlier models. 1261 messages (timeline) RNNL AttentionL FCU label location description timezone Timezone Embedding AttentionTL RNND AttentionD RNNM AttentionM AttentionU Word Embedding linked cities linked users + AttentionN City Embedding User Embedding AttentionUN FCUN TEXT TEXT&META USERNET Figure 1: Overview of the proposed model. RNN denotes a recurrent neural network layer. FC denotes a fully connected layer. The striped layers are message-level processes. ⊕represents element-wise addition. In terms of machine learning methods, our model is a neural network model that shares some similarity with previous neural network models (Liu and Inkpen, 2015; Miura et al., 2016). Our model and these previous models have two key differences. First, our model integrates user network information along with other information. Secondly, our model combines text and metadata with an attention mechanism (Bahdanau et al., 2014). 3 Model 3.1 Proposed Model Figure 1 presents an overview of our model: a complex neural network for classification with a city as a label. For each user, the model accepts inputs of messages, a location field, a description field, a timezone, linked users, and the cities of linked users. User network information is incorporated by city embeddings and user embeddings of linked users. User embeddings are introduced along with city embeddings because linked users with city information1 are limited. We chose to let the model learn geolocation representations of linked users directly via user embeddings. The model can be 1City information are provided by a dataset. The detail of the city information is explained in Section 4. broken down to several components, details of which are described in Section 3.1.1–3.1.4. 
3.1.1 Text Component We describe the text component of the model, which is the “TEXT” section in Figure 1. Figure 2 presents an overview of the text component. The component consists of a recurrent neural network (RNN) (Graves, 2012) layer and attention layers. An input of the component is a timeline of a user, which consists of messages in a time sequence. As an implementation of RNN, we used Gated Recurrent Unit (GRU) (Cho et al., 2014) with a bidirectional setting. In the RNN layer, word embeddings x of a message are processed with the following transition functions: zt = σ (W zxt + U zht−1 + bz) (1) rt = σ (W rxt + U rht−1 + br) (2) ˜ht = tanh (W hxt + U h (rt ⊙ht−1) + bh) (3) ht = (1 −zt) ⊙ht−1 + zt ⊙˜ht (4) where zt is an update gate, rt is a reset gate, ˜ht is a candidate state, ht is a state, W z, W r, W h, U z, U r, U h are weight matrices, bz, br, bh are bias vectors, σ is a logistic sigmoid function, and ⊙is an element-wise multiplication operator. The bi-directional GRU outputs −→ h 1262 messages (timeline) timeline representation AttentionTL RNNM AttentionM Word Embedding x1 xT … input h1 bi-directional recurrent states … g1 g2 gT RNN features … x2 u1 context vectors + … α1g1 α2g2 αTgT Attention features m u2 uT Attention Layer RNN Layer h1 h2 h2 hT hT Figure 2: Overview of the text component with detailed description of RNNM and AttentionM. and ←− h are concatenated to form g where gt = −→ ht∥←− ht and are passed to the first attention layer AttentionM. AttentionM computes a message representation m as a weighted sum of gt with weight αt: m = ∑ t αtgt (5) αt = exp ( vT αut ) ∑ t exp (vTαut) (6) ut = tanh (W αgt + bα) (7) where vα is a weight vector, W α is a weight matrix, and bα a bias vector. ut is an attention context vector calculated from gt with a single fullyconnected layer (Eq. 7). ut is normalized with softmax to obtain αt as a probability (Eq. 6). The message representation m is passed to the second attention layer AttentionTL to obtain a timeline representation from message representations. 3.1.2 Text and Metadata Component We describe text and metadata components of the model, which is the “TEXT&META” section in Figure 1. This component considers the following three types of metadata along with text: location a text field in which a user is allowed to write the user location freely, description a text field a user can use for self-description, and timezone a selective field from which a user can choose a timezone. Note that certain percentages of these fields are not available2, and unknown tokens are used for inputs in such cases. 2Han et al. (2014) reported missing percentages of 19% for location, 24% for description, and 25%for timezone. linked cities + AttentionN City Embedding User Embedding user network representation linked users linked user 1 linked user N current user User Network … c1 cN … inputs p1 p2 pN … c2 u1 context vectors + … α1p1 α2p2 αNpN Attention features m u2 uN Attention Layer a1 aN … a2 + + + Figure 3: Overview of the user network component with a detailed description of the elementwise addition and AttentionN. We process location fields and description fields similarly to messages using an RNN layer and an attention layer. Because there is only one location and one description per user, a second attention layer is not required, as it is in the text component. We also chose to share word embeddings among the messages, the location, and the description processes because these inputs are all textual information. 
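To make Eqs. 5–7 concrete, the short NumPy sketch below computes the attention weights and the weighted sum for a toy set of input vectors; the same form is reused by AttentionM, AttentionTL, AttentionU, and AttentionUN, differing only in what the inputs g_t represent. The dimensions and the randomly initialised parameters are placeholders, not values from the trained model.

```python
# Minimal NumPy rendering of the attention layer in Eqs. 5-7. The inputs g
# stand in for the bi-GRU states g_t (or, for AttentionU, the four metadata
# representations); sizes and random parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)
T, d = 4, 8                                    # number of input vectors and their dimension
g = rng.normal(size=(T, d))                    # g_1 ... g_T

W_a = rng.normal(size=(d, d))                  # W_alpha
b_a = rng.normal(size=d)                       # b_alpha
v_a = rng.normal(size=d)                       # v_alpha

u = np.tanh(g @ W_a + b_a)                     # Eq. 7: context vectors u_t
scores = u @ v_a                               # v_alpha^T u_t
alpha = np.exp(scores) / np.exp(scores).sum()  # Eq. 6: softmax weights alpha_t
m = (alpha[:, None] * g).sum(axis=0)           # Eq. 5: weighted sum m

print(alpha.round(3), m.shape)                 # alpha sums to 1; m is d-dimensional
```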
For the timezone, an embedding is assigned for each timezone value. A processed timeline representation, a location representation, and a description representation are then passed to the attention layer AttentionU with a timezone representation. AttentionU combines these four representations and outputs a user representation. This combination is done as in AttentionTL with four representations as g1 . . . g4 in Eq. 5. 3.1.3 User Network Component We describe the user network component of the model, which is the “USERNET” section in Figure 1. Figure 3 presents an overview of the user network component. The model has two inputs linked cities and linked users. Users connected with a user network are extracted as linked users. We treat their cities3 as linked cities. Linked cities and linked users are assigned with city embeddings c and user embeddings a respectively. c and a are then processed to output p = c ⊕a, where ⊕is an element-wise addition operator. p is then passed to the subsequent attention layer AttentionN to obtain a user network representa3A user with city information implies that the user is included in a training set. 1263 TwitterUS (train) W-NUT (train) #user 279K 782K #tweet 23.8M 9.03M tweet/user 85.6 11.6 #edge 3.69M 3.21M #reduced-edge 2.11M 1.01M reduced-edge/user 7.04 1.29 #city 339 3028 Table 1: Some properties of TwitterUS (train) and W-NUT (train). We were able to obtain approximately 70–78% of the full datasets because of accessibility changes in Twitter. tion as in AttentionU. 3.1.4 Model Output An output of the text and metadata component and an output of the mention network component are further passed to the final attention layer AttentionUN to obtain a merged user representation as in AttentionU. The merged user representation is then connected to labels with a fully connected layer FCUN. 3.2 Sub-models of the Proposed Model SUB-NN-TEXT We prepare a sub-model SUBNN-TEXT by adding FCU and FCUN to the text component. This sub-model can be considered as a variant of a neural network model by Yang et al. (2016), which learns a representation of hierarchical text. SUB-NN-UNET We prepare a sub-model SUBNN-UNET by connecting the text component and the user network component with FCU, AttentionUN, and FCUN. This model can be regarded as a model that uses text and user network information. SUB-NN-META We prepare a sub-model SUBNN-META by adding FCU and FCUN to the metadata component. This model is a text-metabased model that uses text and metadata. 4 Data 4.1 Dataset Specifications TwitterUS The first dataset we used is TwitterUS assembled by Roller et al. (2012), which consists of 429K training users, 10K development users, and 10K test users in a North American region. The ground truth location of a user is set to the first geotag of the user in the dataset. We collected TwitterUS tweets using TwitterAPI to reconstruct TwitterUS to obtain metadata along with text. Up to date versions in November–December 2016 were used for the metadata4. We additionally assigned city centers to ground truth geotags using the city category of Han et al. (2012) to make city prediction possible in this dataset. TwitterUS (train) in Table 1 presents some properties related to the TwitterUS training set. W-NUT The second dataset we used is W-NUT, a user-level dataset of the geolocation prediction shared task of W-NUT 2016 (Han et al., 2016). The dataset consists of 1M training users, 10K development users, and 10K test users. 
The ground truth location of a user is decided by majority voting of the closest city center. Like in TwitterUS, we obtained metadata and texts using TwitterAPI. Up to date versions in August–September 2016 were used for the metadata. W-NUT (train) in Table 1 presents some properties related to the WNUT training set. 4.2 Construction of the User Network We construct mention networks (Jurgens, 2013; Compton et al., 2014; Rahimi et al., 2015a,b) from datasets as user networks. To do so, we follow the approach of Rahimi et al. (2015a) and Rahimi et al. (2015b) who use uni-directional mention to set edges of a mention network. An edge is set between the two users nodes if a user mentions another user. The number of unidirectional mention edges for TwitterUS and WNUT can be found in Table 1. The uni-directional setting results to large numbers of edges, which often are computationally expensive to process. We restricted edges to satisfy one of the following conditions to reduce the size: (1) both users have ground truth locations or (2) one user has a ground truth location and another user is mentioned 5 times or more in a training set. The number of reduced-edges with these conditions in TwitterUS and W-NUT can be confirmed in Table 1. 5 Evaluation 5.1 Implemented Baselines 5.1.1 LR LR is an l1-regularized logistic regression model with k-d tree regions (Roller et al., 2012) used 4TwitterAPI returns the current version of metadata even for an old tweet. 1264 in Rahimi et al. (2015a). The model uses tfidf weighted bag-of-words unigrams for features. This model is simple, but it has shown state-ofthe-art performance in cases when only text is available. 5.1.2 MADCEL-B-LR MADCEL-B-LR, a model presented by (Rahimi et al., 2015a), combines LR with Modified Adsorption (MAD) (Talukdar and Crammer, 2009). MAD is a graph-based label propagation algorithm that optimizes an objective with a prior term, a smoothness term, and an uninformativeness term. LR is combined with MAD by introducing LR results as dongle nodes to MAD. This model includes an algorithm for the construction of a mention network. The algorithm removes celebrity users5 and collapses a mention network6. We use binary edges for user network edges because they performed slightly better than weighted edges by accuracy@161 metric in Rahimi et al. (2015a). 5.1.3 LR-STACK LR-STACK is an ensemble learning model that combines four LR classifiers (LR-MSG, LR-LOC, LR-DESC, LR-TZ) with an l2-regularized logistic regression meta-classifier (LR-2ND). LR-MSG, LR-LOC, LR-DESC, and LR-TZ respectively use messages, location fields, description fields, and timezones as their inputs. This model is similar to the stacking (Wolpert, 1992) approach taken in Han et al. (2013) and Han et al. (2014), which showed superior performance compared to a feature concatenation approach. The model takes the following three steps to combine text and metadata: Step 1 LR-MSG, LRLOC, LR-DESC, and LR-TZ are trained using a training set, Step 2 the outputs of the four classifiers on the training set are obtained with 10-fold cross validation, and Step 3 LR-2ND is trained using the outputs of the four classifiers. 5.1.4 MADCEL-B-LR-STACK MADCEL-B-LR-STACK is a combined model of MADCEL-B-LR and LR-STACK. LR-STACK results are introduced as dongle nodes to MAD instead of LR results to combine text, metadata, and network information. 5Users with more than t unique mentions. 
6Users not included in training users or test users are removed and disconnected edges with the removals are converted to direct edges. 5.2 Model Configurations 5.2.1 Text Processor We applied a lower case conversion, a unicode normalization, a Twitter user name normalization, and a URL normalization for text pre-processing. The pre-processed text is then segmented using Twokenizer (Owoputi et al., 2013) to obtain words. 5.2.2 Pre-training of Embeddings We pre-trained word embeddings using messages, location fields, and description fields of a training set using fastText (Bojanowski et al., 2016) with the skip-gram algorithm. We also pre-trained user embeddings using the non-reduced mention network described in Section 4.2 of a training set with LINE (Tang et al., 2015). The detail of pre-training parameters are described in Appendix A.1. 5.2.3 Neural Network Optimization We chose an objective function of our models to cross-entropy loss. l2 regularization was applied to the RNN layers, the attention context vectors, and the FC layers of our models to avoid overfitting. The objective function was minimized through stochastic gradient descent over shuffled mini-batches with Adam (Kingma and Ba, 2014). 5.2.4 Model Parameters The layers and the embeddings in our models have unit size and embedding dimension parameters. Our models and the baseline models have regularization parameter α, which is sensitive to a dataset. The baseline models have additional k-d tree bucket size c, celebrity threshold t, and MAD parameters µ1, µ2, and µ3, which are also data sensitive. We chose optimal values for these parameters in terms of accuracy with a grid search using the development sets of TwitterUS and W-NUT. Details of the parameter selection strategies and the selected values are described in Appendix A.2. 5.2.5 Metrics We evaluate the models in the following four commonly used metrics in geolocation prediction: accuracy the percentage of correctly predicted cities, accuracy@161 a relaxed accuracy that takes prediction errors within 161 km as correct predictions, median error distance median value of error distances in predictions, and mean error distance mean value of error distances in predictions. 1265 Model Sign. Test ID Accuracy Accuracy @161 Error Distance Median Mean Baselines (reported) Han et al. (2012) Wing and Baldridge (2014) LR (Rahimi et al. 2015b) LR-NA (Rahimi et al. 2016) MADCEL-B-LR (Rahimi et al. 2015a) MADCEL-W-LR (Rahimi et al. 2015a) 26.0 - - - - - 45.0 49.2 50 51 60 60 260 170.5 159 148 77 78 814 703.6 686 636 533 529 Baselines (implemented) LR MADCEL-B-LR LR-STACK MADCEL-B-LR-STACK i ii iii iv 42.0 50.2 50.8 55.7 52.7 60.1 64.1 67.7 121.1 66.5 42.3* 45.1 666.6 582.8 427.7 412.7 Our Models SUB-NN-TEXT SUB-NN-UNET SUB-NN-META Proposed Model i ii iii iv 44.9** 51.0 54.6** 58.5** 55.6** 61.5* 67.2** 70.1** 110.5 65.0 46.8 41.9* 585.1** 481.5** 356.3** 335.7** Table 2: Performances of our models and the baseline models on TwitterUS. Significance tests were performed between models with same Sign. Test IDs. The shaded lines represent values copied from related papers. Asterisks denote significant improvements against paired counterparts with 1% confidence (**) and 5% confidence (*). Model Sign. Test ID Accuracy Accuracy @161 Error Distance Median Mean Baselines (reported) Miura et al. (2016) Jayasinghe et al. 
(2016) 47.6 52.6 - - 16.1 21.7 1122.3 1928.8 Baselines (implemented) LR MADCEL-B-LR LR-STACK MADCEL-B-LR-STACK i ii iii iv 34.1 36.2 51.2 51.6 46.7 49.7 64.9 65.3 248.7 166.3 0.0 0.0 2216.4 2120.6 1496.4 1471.9 Our Models SUB-NN-TEXT SUB-NN-UNET SUB-NN-META Proposed Model i ii iii iv 35.4** 38.1** 54.7** 56.4** 50.3** 53.3** 70.2** 71.9** 155.8** 99.9** 0.0 0.0 1592.6** 1498.6** 825.8** 780.5** Table 3: Performance of our models and baseline models on W-NUT. The same notations as those in Table 2 are used in this table. 5.3 Result Performance on TwitterUS Table 2 presents results of our models and the implemented baseline models on TwitterUS. We also list values from earlier reports (Han et al., 2012; Wing and Baldridge, 2014; Rahimi et al., 2015a,b, 2016) to make our results readily comparable with past reported values. We performed some statistical significance tests among model pairs that share the same inputs. The values in the Sign. Test ID column of Table 2 represent the IDs of these pairs. As a preparation of statistical significance tests, accuracies, accuracy@161s, and error distances of each test user were calculated for each model pair. Twosided Fisher-Pittman Permutation tests were used for testing accuracy and accuracy@161. Mood’s median test was used for testing error distance in terms of median. Paired t-tests were used for testing error distance in terms of mean. We confirmed the significance of improvements in accuracy@161 and mean distance error for all of our models. Three of our models also improved in terms of accuracy. Especially, the proposed model achieved a 2.8% increase in accuracy and a 2.4% increase in accuracy@161 against the counterpart baseline model MADCEL-B-LRSTACK. One negative result we found was the median error distance between SUB-NN-META and LR-STACK. The baseline model LR-STACK performed 4.5 km significantly better than our model. Performance on W-NUT Table 3 presents the results of our models and the implemented baseline models on W-NUT. As for TwitterUS, we listed values from Miura et al. (2016) and Jayasinghe et al. (2016). We tested the significance of these results in the same way as we did for TwitterUS. We confirmed significant improvement in the four metrics for all of our models. The proposed model achieved a 4.8% increase in accuracy and a 1266 description location timeline timezone Figure 4: Estimated probability density functions of the four representations in AttentionU. 6.6% increase in accuracy@161 against the counterpart baseline model MADCEL-B-LR-STACK. The accuracy is 3.8% higher against the previously reported best value (Jayasinghe et al., 2016) which combined texts, metadata, and user network information with an ensemble method. 6 Discussion 6.1 Analyses of Attention Probabilities 6.1.1 Unification Strategies In the evaluation, the proposed model has implicitly shown effectiveness at unifying text, metadata, and user network representations through improvements in the four metrics. However, details of the unification processes are not clear from the model outputs because they are merely the probabilities of estimated locations. To gain insight into the unification processes, we analyzed the states of two attention layers: AttentionU and AttentionUN in Figure 1. Figure 4 presents the estimated probability density functions (PDFs) of the four input representations for AttentionU. These PDFs are estimated with kernel density estimation from the development sets of TwitterUS and W-NUT, where all four representations are available. 
From the PDFs, it is apparent that the model assigns higher probabilities to time line representations than to other three representations in TwitterUS compared to W-NUT. This finding is reasonable because timelines in TwitterUS consist of more tweets (tweet/user in Table 1) and are likely to be more informative than in W-NUT. Figure 5 presents the estimated PDFs of user network representations for AttentionUN. These user network Figure 5: Estimated probability density functions of user network representations in AttentionUN. PDFs are estimated from the development sets of TwitterUS and W-NUT, where both input representations are available. Strong preference of network representation for TwitterUS against WNUT is found in the PDFs. This finding is intuitive because TwitterUS has substantially more user network edges (reduced-edge/user in Table 1) than W-NUT, which is likely to benefit more from user network information. 6.1.2 Attention Patterns We further analyzed the proposed model by clustering attention probabilities to capture typical attention patterns. For each user, we assigned six attention probabilities of AttentionU and AttentionUN as features for a clustering. A kmeans clustering was performed over these users with 9 clusters. The clustering clearly separated the users to 5 clusters for TwitterUS users and 4 clusters for W-NUT users. We extracted typical users of each cluster by selecting the closest users of the cluster centroids. Figure 6 shows a clustering result and the attention probabilities of these users. These attention probabilities can be considered as typical attention patterns of the proposed model and match with the previously estimated PDFs. For example, cluster 2 and 3 represent an attention pattern that processes users by balancing the representations of locations along with the representations of timelines. Additionally, the location probabilities in this pattern are in the right tail region of the location PDF. 6.2 Limitations of Proposed Model 6.2.1 City Prediction The evaluation produced improvements in most of our models in the four metrics. One exception we found was the median distance error between SUB-NN-META and LR-STACKING in TwitterUS. Because the median distance error of SUB-NN-META was quite low (46.8 km), we 1267 1 2 3 4 5 6 7 8 9 TwitterUS W-NUT Cluster ID Dataset Timeline Location Description Timezone User User Network 1 TwitterUS 0.843 0.082 0.040 0.035 0.359 0.641 2 W-NUT 0.517 0.317 0.081 0.085 0.732 0.268 3 TwitterUS 0.432 0.430 0.069 0.069 0.319 0.681 4 W-NUT 0.637 0.160 0.097 0.105 0.737 0.263 5 TwitterUS 0.593 0.219 0.114 0.075 0.230 0.770 6 TwitterUS 0.672 0.214 0.069 0.045 0.365 0.635 7 W-NUT 0.741 0.077 0.080 0.102 0.605 0.395 8 TwitterUS 0.766 0.099 0.068 0.067 0.222 0.778 9 W-NUT 0.800 0.067 0.056 0.078 0.730 0.270 Figure 6: A k-means clustering result and the attention probabilities of users that are closest to the cluster centroids. The underlined values are the max values of the two datasets for each column. Model Error Distance Median Mean σ Oracle 23.3 31.4 30.1 Table 4: Error distance values in TwitterUS with oracle predictions. σ in the table denotes the standard deviation. measured the performance of an oracle model where city predictions are all correct (accuracy of 100%) in the test set. Table 4 denotes this oracle performance. The oracle mean error distance is 31.4 km. Its standard deviation is 30.1. Note that ground truth locations of TwitterUS are geotags and will not exactly match the oracle city centers. 
These oracle values imply that the current median error distances are close to the lower bound of the city classification approach and that they are difficult to improve. 6.2.2 Errors with High Confidences The proposed model still contains 28–30% errors even in accuracy@161. A qualitative analysis of errors with high confidences was performed to investigate cases that the model fails. We found two common types of error in the error analysis. The first is a case when a location field is incorrect due to a reason such as a house move. For example, the model predicted “Hong Kong” for a user with a location field of “Hong Kong” but has the gold location of “Toronto”. The second is a case when a user tweets a place name of a travel. For example, the model predicted “San Francisco” for a user who tweeted about a travel to “San Francisco” but has the gold location of “Boston”. These two types of error are difficult to handle with the current architecture of the proposed model. The architecture only supports single location field which disables the model to track location changes. The architecture also treats each tweet independently which forbids the model to express a temporal state like traveling. 7 Conclusion As described in this paper, we proposed a complex neural network model for geolocation prediction. The model unifies text, metadata, and user network information. The model achieved the maximum of a 3.8% increase in accuracy and a maximum of 6.6% increase in accuracy@161 against several previous state-of-the-art models. We further analyzed the states of several attention layers, which revealed that the probabilities assigned to timeline representations and user network representations match to some statistical characteristics of datasets. As future works of this study, we are planning to expand the proposed model to handle multiple locations and a temporal state to capture location changes and states like traveling. Additionally, we plan to apply the proposed model to other social media analyses such as gender analysis and age analysis. In these analyses, metadata like location fields and timezones may not be effective like in geolocation prediction. However, a user network is known to include various user attributes information including gender and age (McPherson et al., 2001) which suggests the unification of text and user network information to result in a success as in geolocation prediction. Acknowledgments We would like to thank the members of Okumura– Takamura Group at Tokyo Institute of Technology for having insightful discussions about user profiling models in social media. We would also like to thank the anonymous reviewer for their comments to improve this paper. 1268 References Amr Ahmed, Liangjie Hong, and Alexander J. Smola. 2013. Hierarchical geographical modeling of user locations from social media posts. In Proceedings of the 22nd International Conference on World Wide Web. pages 25–36. Lars Backstrom, Eric Sun, and Cameron Marlow. 2010. Find me if you can: Improving geographical prediction with social and spatial proximity. In Proceedings of the 19th International Conference on World Wide Web. pages 61–70. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. Computing Research Repository abs/1409.0473. http://arxiv.org/abs/1409.0473. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606 . 
Miriam Cha, Youngjune Gwon, and H. T. Kung. 2015. Twitter geolocation and regional classification via sparse coding. In Proceedings of the Ninth International AAAI Conference on Web and Social Media. Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you tweet: A content-based approach to geo-locating Twitter users. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management. pages 759– 768. Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2013. A content-driven framework for geolocating microblog users. ACM Transactions on Intelligent Systems and Technology 4(1):1–27. Article 2. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 1724–1734. Ryan Compton, David Jurgens, and David Allen. 2014. Geotagging one hundred million Twitter accounts with total variation minimization. In Proceedings of the 2014 IEEE International Conference on BigData. pages 393–401. David J. Crandall, Lars Backstrom, Daniel Huttenlocher, and Jon Kleinberg. 2009. Mapping the world’s photos. In Proceedings of the 18th International Conference on World Wide Web. pages 761– 770. Aron Culotta. 2010. Towards detecting influenza epidemics by analyzing Twitter messages. In Proceedings of the First Workshop on Social Media Analytics. pages 115–122. Clodoveu A. Davis Jr., Gisele L. Pappa, Diogo Renn´o Rocha de Oliveira, and Filipe de L. Arcanjo. 2011. Inferring the location of Twitter messages based on user relationships. Transactions in GIS 15(6):735–751. Jacob Eisenstein, Amr Ahmed, and Eric P. Xing. 2011. Sparse additive generative models of text. In Proceedings of the 28th International Conference on Machine Learning. pages 1041–1048. Jacob Eisenstein, Brendan O’Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. pages 1277–1287. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. Alex Graves. 2012. Supervised Sequence Labelling with Recurrent Neural Networks, volume 385 of Studies in Computational Intelligence. SpringerVerlag Berlin Heidelberg. Bo Han, Paul Cook, and Timothy Baldwin. 2012. Geolocation prediction in social media data by finding location indicative words. In Proceedings of COLING 2012. pages 1045–1062. Bo Han, Paul Cook, and Timothy Baldwin. 2013. A stacking-based approach to twitter user geolocation prediction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations. pages 7–12. Bo Han, Paul Cook, and Timothy Baldwin. 2014. Textbased Twitter user geolocation prediction. Journal of Artificial Intelligence Research 49(1):451–500. Bo Han, Afshin Rahimi, Leon Derczynski, and Timothy Baldwin. 2016. Twitter geolocation prediction shared task of the 2016 workshop on noisy usergenerated text. In Proceedings of the Second Workshop on Noisy User-generated Text. pages 213–217. Brent Hecht, Lichan Hong, Bongwon Suh, and Ed H. Chi. 2011. Tweets from Justin Bieber’s heart: the dynamics of the location field in user profiles. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. pages 237–246. Liangjie Hong, Amr Ahmed, Siva Gurumurthy, Alexander J. 
Smola, and Kostas Tsioutsiouliklis. 2012. Discovering geographical topics in the Twitter stream. In Proceedings of the 21st International Conference on World Wide Web. pages 769–778. Gaya Jayasinghe, Brian Jin, James Mchugh, Bella Robinson, and Stephen Wan. 2016. CSIRO Data61 at the WNUT geo shared task. In Proceedings of the Second Workshop on Noisy User-generated Text. pages 218–226. 1269 David Jurgens. 2013. That’s what friends are for: Inferring location in online social media platforms based on social relationships. In Proceedings of the Seventh International AAAI Conference on Web and Social Media. David Jurgens, Tyler Finethy, James McCorriston, Yi Xu, and Derek Ruths. 2015. Geolocation prediction in Twitter using social networks: A critical analysis and review of current practice. In Proceedings of the Ninth International AAAI Conference on Web and Social Media. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Sheila Kinsella, Vanessa Murdock, and Neil O’Hare. 2011. ”I’m eating a sandwich in Glasgow”: Modeling locations with tweets. In Proceedings of the Third International Workshop on Search and Mining User-generated Contents. pages 61–68. Longbo Kong, Zhi Liu, and Yan Huang. 2014. SPOT: Locating social media users based on social network context. Proceedings of the VLDB Endowment 7(13):1681–1684. Rui Li, Shengjie Wang, and Kevin Chen-Chuan Chang. 2012a. Multiple location profiling for users and relationships from social network and content. Proceedings of the VLDB Endowment 5(11):1603–1614. Rui Li, Shengjie Wang, Hongbo Deng, Rui Wang, and Kevin Chen-Chuan Chang. 2012b. Towards social user profiling: Unified and discriminative influence model for inferring home locations. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 1023–1031. Ji Liu and Diana Inkpen. 2015. Estimating user location in social media with stacked denoising autoencoders. In Proceedings of the First Workshop on Vector Space Modeling for Natural Language Processing. pages 201–210. Jalal Mahmud, Jeffrey Nichols, and Clemens Drews. 2012. Where is this tweet from? Inferring home locations of Twitter users. In Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media. Eugenio Mart´ınez-C´amara, Maria Teresa Mart´ınValdivia, Luis Alfonso Ure˜na L´opez, and Arturo Montejo Ra´ez. 2014. Sentiment analysis in Twitter. Natural Language Engineering 20(1):1–28. Jeffrey McGee, James Caverlee, and Zhiyuan Cheng. 2013. Location prediction in social media based on tie strength. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management. pages 459–468. Miller McPherson, Lynn Smith-Lovin, and James M Cook. 2001. Birds of a feather: Homophily in social networks. Annual review of sociology 27(1):415– 444. Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, and Tomoko Ohkuma. 2016. A simple scalable neural networks based model for geolocation prediction in Twitter. In Proceedings of the Second Workshop on Noisy User-generated Text. pages 235–239. Simon E. Overell. 2009. Geographic Information Retrieval: Classification, Disambiguation, and Modeling. Ph.D. thesis, Imperial College London. Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 380–390. Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2015a. Twitter user geolocation using a unified text and network prediction model. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). pages 630–636. Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2016. pigeo: A python geotagging tool. In Proceedings of ACL-2016 System Demonstrations. pages 127–132. Afshin Rahimi, Duy Vu, Trevor Cohn, and Timothy Baldwin. 2015b. Exploiting text and network context for geolocation of social media users. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1362–1367. Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user attributes in Twitter. In Proceedings of the Second International Workshop on Search and Mining Usergenerated Contents. pages 37–44. Stephen Roller, Michael Speriosu, Sarat Rallapalli, Benjamin Wing, and Jason Baldridge. 2012. Supervised text-based geolocation using language models on an adaptive grid. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 1500–1510. Dominic Rout, Kalina Bontcheva, Daniel Preot¸iucPietro, and Trevor Cohn. 2013. Where’s @wally?: A classification approach to geolocating users based on their social ties. In Proceedings of the 24th ACM Conference on Hypertext and Social Media. pages 11–20. 1270 Adam Sadilek, Henry Kautz, and Jeffrey P. Bigham. 2012. Finding your friends and following them to where you are. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining. pages 723–732. Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: Real-time event detection by social sensors. In Proceedings of the 19th International Conference on World Wide Web. pages 851–860. Axel Schulz, Aristotelis Hadjakos, Heiko Paulheim, Johannes Nachtwey, and Max M¨uhlh¨auser. 2013. A multi-indicator approach for geolocalization of tweets. In Proceedings of the Seventh International AAAI Conference on Web and Social Media. Pavel Serdyukov, Vanessa Murdock, and Roelof van Zwol. 2009. Placing Flickr photos on a map. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. pages 484–491. Partha Pratim Talukdar and Koby Crammer. 2009. New regularized algorithms for transductive learning. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II. pages 442–457. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web. pages 1067–1077. Andranik Tumasjan, Timm O. Sprenger, Philipp G. Sandner, and Isabell M. Welpe. 2010. Predicting elections with Twitter: What 140 characters reveal about political sentiment. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media. pages 178–185. Benjamin Wing and Jason Baldridge. 2011. Simple supervised document geolocation with geodesic grids. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. pages 955–964. Benjamin Wing and Jason Baldridge. 2014. Hierarchical discriminative classification for text-based geolocation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 336–348. David H. Wolpert. 1992. Stacked generalization. Neural Networks 5(2):241–259. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1480–1489. 1271 A Supplemental Materials A.1 Parameters of Embedding Pre-training Word embeddings were pre-trained with the parameters of learning rate=0.025, window size=5, negative sample size=5, and epoch=5. User embeddings were pre-trained with the parameters of initial learning rate=0.025, order=2, negative sample size=5, and training sample size=100M. A.2 Model Parameters and Parameter Selection Strategies Unit Sizes, Embedding Dimensions, and a Max Tweet Number The layers and the embeddings in our models have unit size and embedding dimension parameters. We also restricted the maximum number of tweets per user for TwitterUS to reduce memory footprints. Table 5 shows the values for these parameters. Smaller values were set for TwitterUS because TwitterUS is approximately 2.6 times larger in terms of tweet number. It was computationally expensive to process TwiiterUS in the same settings as W-NUT. Regularization Parameters and Bucket Sizes We chose optimal values of α using a grid search with the development sets of TwitterUS and WNUT. The range of α was set as the following: α ∈{1e−4, 5e−5, 1e−5, 5e−6, 1e−6, 5e−7, 1e−7, 5e−8, 1e−8}. We also chose optimal values of c using grid search with the development sets of TwitterUS and W-NUT for the baseline models. The range of c was set as the following for TwitterUS: c ∈{50, 100, 150, 200, 250, 300, 339}. The following was set for W-NUT: c ∈{100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1500, 2000, 2500, 3000, 3028}. Table 6 presents selected values of α and c. For LR-STACK and MADCEl-B-LR-STACK, different parameters of α and c were selected for each logistic regression classifier. MAD Parameters and Celebrity Threshold The MAD parameters µ1, µ2, and µ3 and celebrity threshold t were also chosen using grid search with the development sets of TwitterUS and WNUT. The ranges of µ1, µ2, and µ3 were set as the following: µ1 ∈{1.0}, µ2 ∈{0.001, 0.01, 0.1, 1.0, 10.0}, µ3 ∈{0.0, 0.001, 0.01, 0.1, 1.0, 10.0}. The range of t for TwitterUS was set as t ∈ {2, . . . , 16}. The range of t for W-NUT was set TwitterUS W-NUT RNN unit size 100 200 Attention context vector size 200 400 FC unit size 200 400 Word embedding dimension 100 200 Timezone embedding dimension 200 400 City embedding dimension 200 400 User embedding dimension 200 400 Max tweet number per user 200 - Table 5: Unit sizes, embedding dimensions, and max tweet numbers of our models. 
Model Parameter TwitterUS W-NUT SUB-NN-TEXT α 1e-8 1e-7 SUB-NN-UNET 1e-6 5e-8 SUB-NN-META 1e-8 5e-8 Proposed Model 1e-6 5e-8 LR MADCEL-B-LR α 1e-6 5e-7 c 300 3000 LR-STACK MADCEL-B-LR-STACK αMSG 1e-6 5e-7 αLOC 1e-6 1e-6 αDESC 5e-6 1e-6 αTZ 1e-4 5e-6 α2ND 1e-6 1e-7 cMSG 300 3000 cLOC 300 3000 cDESC 250 1500 cTZ 100 2500 c2ND 300 2000 Table 6: Regularization parameters and bucket sizes selected for our models and baseline models. Model Parameter TwitterUS W-NUT MADCEL-B-LR µ1 1.0 1.0 µ2 1.0 10.0 µ3 0.01 0.1 t 5 4 MADCEL-B-LR-STACK µ1 1.0 1.0 µ2 1.0 1.0 µ3 0.1 0.0 t 4 2 Table 7: MAD parameters and celebrity threshold selected for baseline models. as t ∈{2, . . . , 6}. Table 6 presents selected values of µ1, µ2, µ3, and t. 1272
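The hyperparameter selection in this appendix is a plain grid search on the development sets. As a rough, self-contained illustration (not the paper's code), the loop below searches the α grid from A.2 with an L2-regularized logistic regression on synthetic data; the real search would train each sub-network or baseline on the actual geolocation features and score it on the development set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

alphas = [1e-4, 5e-5, 1e-5, 5e-6, 1e-6, 5e-7, 1e-7, 5e-8, 1e-8]

# Synthetic stand-ins for training/development features and city labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(2000, 50)), rng.integers(0, 10, size=2000)
X_dev, y_dev = rng.normal(size=(500, 50)), rng.integers(0, 10, size=500)

best_alpha, best_acc = None, -1.0
for alpha in alphas:
    # scikit-learn's C is the inverse regularization strength; C = 1/alpha is a
    # stand-in mapping for the paper's L2 coefficient alpha.
    clf = LogisticRegression(C=1.0 / alpha, max_iter=1000)
    clf.fit(X_train, y_train)
    acc = clf.score(X_dev, y_dev)
    if acc > best_acc:
        best_alpha, best_acc = alpha, acc

print(f"selected alpha = {best_alpha} (dev accuracy = {best_acc:.3f})")
```

The same loop structure applies to the bucket size c and the MAD parameters, simply iterating over the ranges listed above and keeping the setting with the best development-set score.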
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1273–1283, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1117

Multi-Task Video Captioning with Video and Entailment Generation
Ramakanth Pasunuru and Mohit Bansal
UNC Chapel Hill
{ram, mbansal}@cs.unc.edu

Abstract
Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware video encoder representations, and a logically-directed language entailment generation task to learn better video-entailing caption decoder representations. For this, we present a many-to-many multi-task learning model that shares parameters across the encoders and decoders of the three tasks. We achieve significant improvements and the new state-of-the-art on several standard video captioning datasets using diverse automatic and human evaluations. We also show mutual multi-task improvements on the entailment generation task.

1 Introduction
Video captioning is the task of automatically generating a natural language description of the content of a video, as shown in Fig. 1. It has various applications such as assistance to a visually impaired person and improving the quality of online video search or retrieval. This task has gained recent momentum in the natural language processing and computer vision communities, esp. with the advent of powerful image processing features as well as sequence-to-sequence LSTM models. It is also a step forward from static image captioning, because in addition to modeling the spatial visual features, the model also needs to learn the temporal across-frame action dynamics and the logical storyline language dynamics.

Figure 1: A video captioning example from the YouTube2Text dataset, with the ground truth captions and our many-to-many multi-task model's predicted caption.

Previous work in video captioning (Venugopalan et al., 2015a; Pan et al., 2016b) has shown that recurrent neural networks (RNNs) are a good choice for modeling the temporal information in the video. A sequence-to-sequence model is then used to 'translate' the video to a caption. Venugopalan et al. (2016) showed linguistic improvements over this by fusing the decoder with external language models. Furthermore, an attention mechanism between the video frames and the caption words captures some of the temporal matching relations better (Yao et al., 2015; Pan et al., 2016a). More recently, hierarchical two-level RNNs were proposed to allow for longer inputs and to model the full paragraph caption dynamics of long video clips (Pan et al., 2016a; Yu et al., 2016). Despite these recent improvements, video captioning models still suffer from the lack of sufficient temporal and logical supervision to be able to correctly capture the action sequence and story-dynamic language in videos, esp.
in the case of short clips. Hence, they would benefit from incorporating such complementary directed knowledge, both visual and textual. We address this by jointly training the task of video captioning with two related directed-generation tasks: a temporally1273 directed unsupervised video prediction task and a logically-directed language entailment generation task. We model this via many-to-many multi-task learning based sequence-to-sequence models (Luong et al., 2016) that allow the sharing of parameters among the encoders and decoders across the three different tasks, with additional shareable attention mechanisms. The unsupervised video prediction task, i.e., video-to-video generation (adapted from Srivastava et al. (2015)), shares its encoder with the video captioning task’s encoder, and helps it learn richer video representations that can predict their temporal context and action sequence. The entailment generation task, i.e., premise-to-entailment generation (based on the image caption domain SNLI corpus (Bowman et al., 2015)), shares its decoder with the video captioning decoder, and helps it learn better video-entailing caption representations, since the caption is essentially an entailment of the video, i.e., it describes subsets of objects and events that are logically implied by or follow from the full video content). The overall many-tomany multi-task model combines all three tasks. Our three novel multi-task models show statistically significant improvements over the state-ofthe-art, and achieve the best-reported results (and rank) on multiple datasets, based on several automatic and human evaluations. We also demonstrate that video captioning, in turn, gives mutual improvements on the new multi-reference entailment generation task. 2 Related Work Early video captioning work (Guadarrama et al., 2013; Thomason et al., 2014; Huang et al., 2013) used a two-stage pipeline to first extract a subject, verb, and object (S,V,O) triple and then generate a sentence based on it. Venugopalan et al. (2015b) fed mean-pooled static frame-level visual features (from convolution neural networks pre-trained on image recognition) of the video as input to the language decoder. To harness the important frame sequence temporal ordering, Venugopalan et al. (2015a) proposed a sequence-to-sequence model with video encoder and language decoder RNNs. More recently, Venugopalan et al. (2016) explored linguistic improvements to the caption decoder by fusing it with external language models. Moreover, an attention or alignment mechanism was added between the encoder and the decoder to learn the temporal relations (matching) between the video frames and the caption words (Yao et al., 2015; Pan et al., 2016a). In contrast to static visual features, Yao et al. (2015) also considered temporal video features from a 3D-CNN model pretrained on an action recognition task. To explore long range temporal relations, Pan et al. (2016a) proposed a two-level hierarchical RNN encoder which limits the length of input information and allows temporal transitions between segments. Yu et al. (2016)’s hierarchical RNN generates sentences at the first level and the second level captures inter-sentence dependencies in a paragraph. Pan et al. (2016b) proposed to simultaneously learn the RNN word probabilities and a visual-semantic joint embedding space that enforces the relationship between the semantics of the entire sentence and the visual content. 
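The parameter sharing described in the introduction (video encoder shared with the video prediction task; caption decoder shared with the entailment generation task) amounts to reusing the same module objects across task-specific models. A schematic PyTorch sketch of that wiring is shown below; the module sizes and the bare Seq2Seq wrapper (with no forward pass) are our own illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Bare container: holding the same sub-module in two models shares its parameters."""
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

# Illustrative sizes (512-dim inputs, 1024-dim hidden states).
video_encoder   = nn.LSTM(512, 1024, bidirectional=True, batch_first=True)
text_encoder    = nn.LSTM(512, 1024, bidirectional=True, batch_first=True)
caption_decoder = nn.LSTM(512, 1024, batch_first=True)
frame_decoder   = nn.LSTM(512, 1024, batch_first=True)

captioning = Seq2Seq(video_encoder, caption_decoder)  # video -> caption
video_pred = Seq2Seq(video_encoder, frame_decoder)    # video -> future frames (shared encoder, 1-to-M)
entailment = Seq2Seq(text_encoder, caption_decoder)   # premise -> hypothesis (shared decoder, M-to-1)

# Gradients from any of the three tasks update the shared modules.
assert captioning.encoder is video_pred.encoder
assert captioning.decoder is entailment.decoder
```

Training then alternates mini-batches among the three tasks according to a mixing ratio, as specified later in Sec. 3.5.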
Despite these useful recent improvements, video captioning still suffers from limited supervision and generalization capabilities, esp. given the complex action-based temporal and story-based logical dynamics that need to be captured from short video clips. Our work addresses this issue by bringing in complementary temporal and logical knowledge from video prediction and textual entailment generation tasks (respectively), and training them together via many-to-many multi-task learning.

Multi-task learning is a useful learning paradigm to improve the supervision and the generalization performance of a task by jointly training it with related tasks (Caruana, 1998; Argyriou et al., 2007; Kumar and Daumé III, 2012). Recently, Luong et al. (2016) combined multi-task learning with sequence-to-sequence models, sharing parameters across the tasks' encoders and decoders. They showed improvements on machine translation using parsing and image captioning. We additionally incorporate an attention mechanism into this many-to-many multi-task learning approach and improve the multimodal, temporal-logical video captioning task by sharing its video encoder with the encoder of a video-to-video prediction task and by sharing its caption decoder with the decoder of a linguistic premise-to-entailment generation task.

Image representation learning has been successful via supervision from very large object-labeled datasets. However, similar amounts of supervision are lacking for video representation learning. Srivastava et al. (2015) address this by proposing unsupervised video representation learning via sequence-to-sequence RNN models, where they reconstruct the input video sequence or predict the future sequence. We model video generation with an attention-enhanced encoder-decoder and harness it to improve video captioning.

Figure 2: Baseline sequence-to-sequence model for video captioning: standard encoder-decoder LSTM-RNN model.

The task of recognizing textual entailment (RTE) is to classify whether the relationship between a premise and hypothesis sentence is that of entailment (i.e., logically follows), contradiction, or independence (neutral), which is helpful for several downstream NLP tasks. The recent Stanford Natural Language Inference (SNLI) corpus by Bowman et al. (2015) allowed training end-to-end neural networks that outperform earlier feature-based RTE models (Lai and Hockenmaier, 2014; Jimenez et al., 2014). However, directly generating the entailed hypothesis sentences given a premise sentence would be even more beneficial than retrieving or reranking sentence pairs, because most downstream generation tasks only come with the source sentence and not pairs. Recently, Kolesnyk et al. (2016) tried a sequence-to-sequence model for this on the original SNLI dataset, which is a single-reference setting and hence restricts automatic evaluation. We modify the SNLI corpus to a new multi-reference (and a more challenging zero train-test premise overlap) setting, and present a novel multi-task training setup with the related video captioning task (where the caption also entails a video), showing mutual improvements on both tasks.

3 Models
We first discuss a simple encoder-decoder model as a baseline reference for video captioning. Next, we improve this via an attention mechanism.
Finally, we present similar models for the unsupervised video prediction and entailment generation tasks, and then combine them with video captioning via the many-to-many multi-task approach.

3.1 Baseline Sequence-to-Sequence Model
Our baseline model is similar to the standard machine translation encoder-decoder RNN model (Sutskever et al., 2014), where the final state of the encoder RNN is input as an initial state to the decoder RNN, as shown in Fig. 2. The RNN is based on Long Short-Term Memory (LSTM) units, which are good at memorizing long sequences due to forget-style gates (Hochreiter and Schmidhuber, 1997). For video captioning, our input to the encoder is the video frame features1 {f_1, f_2, ..., f_n} of length n, and the caption word sequence {w_1, w_2, ..., w_m} of length m is generated during the decoding phase. The distribution of the output sequence w.r.t. the input sequence is:

p(w_1, ..., w_m | f_1, ..., f_n) = \prod_{t=1}^{m} p(w_t | h^d_t)    (1)

where h^d_t is the hidden state at the t-th time step of the decoder RNN, obtained from h^d_{t-1} and w_{t-1} via the standard LSTM-RNN equations. The distribution p(w_t | h^d_t) is given by a softmax over all the words in the vocabulary.

1 We use several popular image features such as VGGNet, GoogLeNet and Inception-v4. Details in Sec. 4.1.

3.2 Attention-based Model
Our attention model architecture is similar to Bahdanau et al. (2015), with a bidirectional LSTM-RNN as the encoder and a unidirectional LSTM-RNN as the decoder, see Fig. 3. At each time step t, the decoder LSTM hidden state h^d_t is a nonlinear recurrent function of the previous decoder hidden state h^d_{t-1}, the previous time-step's generated word w_{t-1}, and the context vector c_t:

h^d_t = S(h^d_{t-1}, w_{t-1}, c_t)    (2)

where c_t is a weighted sum of encoder hidden states {h^e_i}:

c_t = \sum_{i=1}^{n} \alpha_{t,i} h^e_i    (3)

These attention weights {\alpha_{t,i}} act as an alignment mechanism by giving higher weights to certain encoder hidden states which match that decoder time step better, and are computed as:

\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{k=1}^{n} \exp(e_{t,k})}    (4)

where the attention function e_{t,i} is defined as:

e_{t,i} = w^T \tanh(W^e_a h^e_i + W^d_a h^d_{t-1} + b_a)    (5)

where w, W^e_a, W^d_a, and b_a are learned parameters. This attention-based sequence-to-sequence model (Fig. 3) is our enhanced baseline for video captioning. We next discuss similar models for the new tasks of unsupervised video prediction and entailment generation and then finally share them via multi-task learning.

Figure 3: Attention-based sequence-to-sequence baseline model for video captioning (similar models also used for video prediction and entailment generation).

Figure 4: Our many-to-many multi-task learning model to share encoders and decoders of the video captioning, unsupervised video prediction, and entailment generation tasks. (Figure components: Video Encoder, Language Encoder, Video Decoder, Language Decoder.)

3.3 Unsupervised Video Prediction
We model unsupervised video representation by predicting the sequence of future video frames given the current frame sequence. Similar to Sec. 3.2, a bidirectional LSTM-RNN encoder and an LSTM-RNN decoder are used, along with attention. If the frame-level features of a video of length n are {f_1, f_2, ..., f_n}, these are divided into two sets such that, given the current frames {f_1, f_2, ..., f_k} (in its encoder), the model has to predict (decode) the rest of the frames {f_{k+1}, f_{k+2}, ..., f_n}.
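As a concrete reference for the attention mechanism of Eqs. (2)-(5), which the video prediction and entailment models reuse, the NumPy sketch below computes the alignment weights and context vector for one decoder step. Dimensions and random parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def attention_context(enc_states, dec_prev, W_e, W_d, b_a, w):
    # enc_states: (n, d_enc) encoder hidden states; dec_prev: (d_dec,) previous decoder state.
    scores = np.tanh(enc_states @ W_e.T + dec_prev @ W_d.T + b_a) @ w   # e_{t,i}, Eq. (5)
    alphas = np.exp(scores - scores.max())
    alphas = alphas / alphas.sum()                                      # softmax, Eq. (4)
    context = alphas @ enc_states                                       # weighted sum, Eq. (3)
    return context, alphas

rng = np.random.default_rng(0)
n, d_enc, d_dec, d_att = 50, 2048, 1024, 256          # 50 frames, bi-LSTM encoder states
enc_states = rng.normal(size=(n, d_enc))
dec_prev = rng.normal(size=d_dec)
W_e = rng.normal(size=(d_att, d_enc)) * 0.01
W_d = rng.normal(size=(d_att, d_dec)) * 0.01
b_a, w = np.zeros(d_att), rng.normal(size=d_att)

context, alphas = attention_context(enc_states, dec_prev, W_e, W_d, b_a, w)
print(context.shape, round(alphas.sum(), 3))          # (2048,) 1.0
```

The same attention computation is applied over the encoder frame states in the video prediction task described above.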
The motivation is that this helps the video encoder learn rich temporal representations that are aware of their action-based context and are also robust to missing frames and varying frame lengths or motion speeds. The optimization function is defined as:

\min_{\phi} \sum_{t=1}^{n-k} || f^d_t - f_{t+k} ||_2^2    (6)

where \phi are the model parameters, f_{t+k} is the true future frame feature at decoder time step t, and f^d_t is the decoder's predicted future frame feature at decoder time step t, defined as:

f^d_t = S(h^d_{t-1}, f^d_{t-1}, c_t)    (7)

similar to Eqn. 2, with h^d_{t-1} and f^d_{t-1} as the previous time step's hidden state and predicted frame feature respectively, and c_t as the attention-weighted context vector.

3.4 Entailment Generation
Given a sentence (premise), the task of entailment generation is to generate a sentence (hypothesis) which is a logical deduction or implication of the premise. Our entailment generation model again uses a bidirectional LSTM-RNN encoder and LSTM-RNN decoder with an attention mechanism (similar to Sec. 3.2). If the premise s^p is a sequence of words {w^p_1, w^p_2, ..., w^p_n} and the hypothesis s^h is {w^h_1, w^h_2, ..., w^h_m}, the distribution of the entailed hypothesis w.r.t. the premise is:

p(w^h_1, ..., w^h_m | w^p_1, ..., w^p_n) = \prod_{t=1}^{m} p(w^h_t | h^d_t)    (8)

where the distribution p(w^h_t | h^d_t) is again obtained via a softmax over all the words in the vocabulary and the decoder state h^d_t is similar to Eqn. 2.

3.5 Multi-Task Learning
Multi-task learning helps in sharing information between different tasks and across domains. Our primary aim is to improve the video captioning model, where visual content translates to a textual form in a directed (entailed) generation way. Hence, this presents an interesting opportunity to share temporally and logically directed knowledge with both visual and linguistic generation tasks. Fig. 4 shows our overall many-to-many multi-task model for jointly learning video captioning, unsupervised video prediction, and textual entailment generation. Here, the video captioning task shares its video encoder (parameters) with the encoder of the video prediction task (one-to-many setting) so as to learn context-aware and temporally-directed visual representations (see Sec. 3.3). Moreover, the decoder of the video captioning task is shared with the decoder of the textual entailment generation task (many-to-one setting), thus helping generate captions that can 'entail', i.e., are logically implied by or follow from the video content (see Sec. 3.4).2 In both the one-to-many and the many-to-one settings, we also allow the attention parameters to be shared or separated. The overall many-to-many setting thus improves both the visual and language representations of the video captioning model.

We train the multi-task model by alternately optimizing each task in mini-batches based on a mixing ratio. Let \alpha_v, \alpha_f, and \alpha_e be the number of mini-batches optimized alternately from each of these three tasks – video captioning, unsupervised video future frame prediction, and entailment generation, resp. Then the mixing ratio is defined as \frac{\alpha_v}{\alpha_v+\alpha_f+\alpha_e} : \frac{\alpha_f}{\alpha_v+\alpha_f+\alpha_e} : \frac{\alpha_e}{\alpha_v+\alpha_f+\alpha_e}.

4 Experimental Setup
4.1 Datasets
Video Captioning Datasets We report results on three popular video captioning datasets.
First, we use the YouTube2Text or MSVD (Chen and Dolan, 2011) for our primary results, which con2Empirically, logical entailment helped captioning more than simple fusion with language modeling (i.e., partial sentence completion with no logical implication), because a caption also entails a video in a logically-directed sense and hence the entailment generation task matches the video captioning task better than language modeling. Moreover, a multi-task setup is more suitable to add directed information such as entailment (as opposed to pretraining or fusion with only the decoder). Details in Sec. 5.1. tains 1970 YouTube videos in the wild with several different reference captions per video (40 on average). We also use MSR-VTT (Xu et al., 2016) with 10, 000 diverse video clips (from a video search engine) – it has 200, 000 video clipsentence pairs and around 20 captions per video; and M-VAD (Torabi et al., 2015) with 49, 000 movie-based video clips but only 1 or 2 captions per video, making most evaluation metrics (except paraphrase-based METEOR) infeasible. We use the standard splits for all three datasets. Further details about all these datasets are provided in the supplementary. Video Prediction Dataset For our unsupervised video representation learning task, we use the UCF-101 action videos dataset (Soomro et al., 2012), which contains 13, 320 video clips of 101 action categories, and suits our video captioning task well because it also contains short video clips of a single action or few actions. We use the standard splits – further details in supplementary. Entailment Generation Dataset For the entailment generation encoder-decoder model, we use the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015), which contains human-annotated English sentence pairs with classification labels of entailment, contradiction and neutral. It has a total of 570, 152 sentence pairs out of which 190, 113 correspond to true entailment pairs, and we use this subset in our multi-task video captioning model. For improving video captioning, we use the same training/validation/test splits as provided by Bowman et al. (2015), which is 183, 416 training, 3, 329 validation, and 3, 368 testing pairs (for the entailment subset). However, for the entailment generation multitask results (see results in Sec. 5.3), we modify the splits so as to create a multi-reference setup which can afford evaluation with automatic metrics. A given premise usually has multiple entailed hypotheses but the original SNLI corpus is set up as single-reference (for classification). Due to this, the different entailed hypotheses of the same premise land up in different splits of the dataset (e.g., one in train and one in test/validation) in many cases. Therefore, we regroup the premiseentailment pairs and modify the split as follows: among the 190, 113 premise-entailment pairs subset of the SNLI corpus, there are 155, 898 unique premises; out of which 145, 822 have only one hy1277 pothesis and we make this the training set, and the rest of them (10, 076) have more than one hypothesis, which we randomly shuffle and divide equally into test and validation sets, so that each of these two sets has approximately the same distribution of the number of reference hypotheses per premise. 
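The regrouping just described (single-hypothesis premises go to training; multi-hypothesis premises are shuffled and split between validation and test) is straightforward to express in code. The sketch below is a simplified illustration that assumes the entailment pairs are available as (premise, hypothesis) tuples and omits the balancing of the number of references per premise mentioned above; it is not the authors' released split script.

```python
import random
from collections import defaultdict

def regroup_snli(pairs, seed=0):
    by_premise = defaultdict(list)
    for premise, hypothesis in pairs:
        by_premise[premise].append(hypothesis)

    # Premises with exactly one entailed hypothesis form the training set.
    train = {p: hs for p, hs in by_premise.items() if len(hs) == 1}
    multi = [p for p, hs in by_premise.items() if len(hs) > 1]

    # Premises with multiple hypotheses are shuffled and divided between val and test.
    random.Random(seed).shuffle(multi)
    half = len(multi) // 2
    val = {p: by_premise[p] for p in multi[:half]}
    test = {p: by_premise[p] for p in multi[half:]}
    return train, val, test   # every premise lands in exactly one split

pairs = [("a man plays guitar", "a man plays an instrument"),
         ("a man plays guitar", "someone is playing music"),
         ("two dogs run outside", "dogs are outdoors")]
train, val, test = regroup_snli(pairs)
print(len(train), len(val), len(test))
```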
These new validation and test sets hence contain premises with multiple entailed hypotheses as ground truth references, thus allowing for automatic metric evaluation, where differing generations still get positive scores by matching one of the multiple references. Also, this creates a more challenging dataset for entailment generation because of zero premise overlap between the training and val/test sets. We will make these split details publicly available. Pre-trained Visual Frame Features For the three video captioning and UCF-101 datasets, we fix our sampling rate to 3fps to bring uniformity in the temporal representation of actions across all videos. These sampled frames are then converted into features using several stateof-the-art pre-trained models on ImageNet (Deng et al., 2009) – VGGNet (Simonyan and Zisserman, 2015), GoogLeNet (Szegedy et al., 2015; Ioffe and Szegedy, 2015), and Inception-v4 (Szegedy et al., 2016). Details of these feature dimensions and layer positions are in the supplementary. 4.2 Evaluation (Automatic and Human) For our video captioning as well as entailment generation results, we use four diverse automatic evaluation metrics that are popular for image/video captioning and language generation in general: METEOR (Denkowski and Lavie, 2014), BLEU-4 (Papineni et al., 2002), CIDEr-D (Vedantam et al., 2015), and ROUGE-L (Lin, 2004). Particularly, METEOR and CIDEr-D have been justified to be better for generation tasks, because CIDEr-D uses consensus among the (large) number of references and METEOR uses soft matching based on stemming, paraphrasing, and WordNet synonyms. We use the standard evaluation code from the Microsoft COCO server (Chen et al., 2015) to obtain these results and also to compare the results with previous papers.3 We also present human evaluation results based 3We use avg. of these four metrics on validation set to choose the best model, except for single-reference M-VAD dataset where we only report and choose based on METEOR. on relevance (i.e., how related is the generated caption w.r.t. the video contents such as actions, objects, and events; or is the generated hypothesis entailed or implied by the premise) and coherence (i.e., a score on the logic, readability, and fluency of the generated sentence). 4.3 Training Details We tune all hyperparameters on the dev splits: LSTM-RNN hidden state size, learning rate, weight initializations, and mini-batch mixing ratios (tuning ranges in supplementary). We use the following settings in all of our models (unless otherwise specified): we unroll video encoder/decoder RNNs to 50 time steps and language encoder/decoder RNNs to 30 time steps. We use a 1024-dimension RNN hidden state size and 512-dim vectors to embed visual features and word vectors. We use Adam optimizer (Kingma and Ba, 2015). We apply a dropout of 0.5. See subsections below and supp for full details. 5 Results and Analysis 5.1 Video Captioning on YouTube2Text Table 1 presents our primary results on the YouTube2Text (MSVD) dataset, reporting several previous works, all our baselines and attention model ablations, and our three multi-task models, using the four automated evaluation metrics. For each subsection below, we have reported the important training details inline, and refer to the supplementary for full details (e.g., learning rates and initialization). Baseline Performance We first present all our baseline model choices (ablations) in Table 1. 
Our baselines represent the standard sequence-tosequence model with three different visual feature types as well as those with attention mechanisms. Each baseline model is trained with three random seed initializations and the average is reported (for stable results). The final baseline model ⊗instead uses an ensemble (E), which is a standard denoising method (Sutskever et al., 2014) that performs inference over ten randomly initialized models, i.e., at each time step t of the decoder, we generate a word based on the avg. of the likelihood probabilities from the ten models. Moreover, we use beam search with size 5 for all baseline models. Overall, the final baseline model with Inceptionv4 features, attention, and 10-ensemble performs 1278 Models METEOR CIDEr-D ROUGE-L BLEU-4 PREVIOUS WORK LSTM-YT (V) (Venugopalan et al., 2015b) 26.9 31.2 S2VT (V + A) (Venugopalan et al., 2015a) 29.8 Temporal Attention (G + C) (Yao et al., 2015) 29.6 51.7 41.9 LSTM-E (V + C) (Pan et al., 2016b) 31.0 45.3 Glove + DeepFusion (V) (E) (Venugopalan et al., 2016) 31.4 42.1 p-RNN (V + C) (Yu et al., 2016) 32.6 65.8 49.9 HNRE + Attention (G + C) (Pan et al., 2016a) 33.9 46.7 OUR BASELINES Baseline (V) 31.4 63.9 68.0 43.6 Baseline (G) 31.7 64.8 68.6 44.1 Baseline (I) 33.3 75.6 69.7 46.3 Baseline + Attention (V) 32.6 72.2 69.0 47.5 Baseline + Attention (G) 33.0 69.4 68.3 44.9 Baseline + Attention (I) 33.8 77.2 70.3 49.9 Baseline + Attention (I) (E) ⊗ 35.0 84.4 71.5 52.6 OUR MULTI-TASK LEARNING MODELS ⊗+ Video Prediction (1-to-M) 35.6 88.1 72.9 54.1 ⊗+ Entailment Generation (M-to-1) 35.9 88.0 72.7 54.4 ⊗+ Video Prediction + Entailment Generation (M-to-M) 36.0 92.4 72.8 54.5 Table 1: Primary video captioning results on Youtube2Text (MSVD), showing previous works, our several strong baselines, and our three multi-task models. Here, V, G, I, C, A are short for VGGNet, GoogLeNet, Inception-v4, C3D, and AlexNet visual features; E = ensemble. The multi-task models are applied on top of our best video captioning baseline ⊗, with an ensemble. All the multi-task models are statistically significant over the baseline (discussed inline in the corresponding results sections). well (and is better than all previous state-of-theart), and so we next add all our novel multi-task models on top of this final baseline. Multi-Task with Video Prediction (1-to-M) Here, the video captioning and unsupervised video prediction tasks share their encoder LSTM-RNN weights and image embeddings in a one-to-many multi-task setting. Two important hyperparameters tuned (on the validation set of captioning datasets) are the ratio of encoder vs decoder frames for video prediction on UCF-101 (where we found that 80% of frames as input and 20% for prediction performs best); and the mini-batch mixing ratio between the captioning and video prediction tasks (where we found 100 : 200 works well). Table 1 shows a statistically significant improvement4 in all metrics in comparison to the best baseline (non-multitask) model as well as w.r.t. all previous works, demonstrating the effectiveness of multi-task learning for video captioning with video prediction, even with unsupervised signals. Multi-Task with Entailment Generation (Mto-1) Here, the video captioning and entailment generation tasks share their language decoder LSTM-RNN weights and word embeddings in a many-to-one multi-task setting. 
We observe 4Statistical significance of p < 0.01 for CIDEr-D and ROUGE-L, p < 0.02 for BLEU-4, p < 0.03 for METEOR, based on the bootstrap test (Noreen, 1989; Efron and Tibshirani, 1994) with 100K samples. that a mixing ratio of 100 : 50 alternating minibatches (between the captioning and entailment tasks) works well here. Again, Table 1 shows statistically significant improvements5 in all the metrics in comparison to the best baseline model (and all previous works) under this multi-task setting. Note that in our initial experiments, our entailment generation model helped the video captioning task significantly more than the alternative approach of simply improving fluency by adding (or deep-fusing) an external language model (or pre-trained word embeddings) to the decoder (using both in-domain and out-of-domain language models), again because a caption also ‘entails’ a video in a logically-directed sense and hence this matches our captioning task better (also see results of Venugopalan et al. (2016) in Table 1). Multi-Task with Video and Entailment Generation (M-to-M) Combining the above one-tomany and many-to-one multi-task learning models, our full model is the 3-task, many-to-many model (Fig. 4) where both the video encoder and the language decoder of the video captioning model are shared (and hence improved) with that of the unsupervised video prediction and entailment generation models, respectively.6 A mixing ratio of 100 : 100 : 50 alternate mini-batches 5Statistical significance of p < 0.01 for all four metrics. 6We found the setting with unshared attention parameters to work best, likely because video captioning and video prediction prefer very different alignment distributions. 1279 Models M C R B Venugopalan (2015b)⋆ 23.4 32.3 Yao et al. (2015)⋆ 25.2 35.2 Xu et al. (2016) 25.9 36.6 Rank1: v2t navigator 28.2 44.8 60.9 40.8 Rank2: Aalto 26.9 45.7 59.8 39.8 Rank3: VideoLAB 27.7 44.1 60.6 39.1 Our Model (New Rank1) 28.8 47.1 60.2 40.8 Table 2: Results on MSR-VTT dataset on the 4 metrics. ⋆Results are reimplementations as per Xu et al. (2016). We also report the top 3 leaderboard systems – our model achieves the new rank 1 based on their ranking method. Models METEOR Yao et al. (2015) 5.7 Venugopalan et al. (2015a) 6.7 Pan et al. (2016a) 6.8 Our M-to-M Multi-Task Model 7.4 Table 3: Results on M-VAD dataset. of video captioning, unsupervised video prediction, and entailment generation, resp. works well. Table 1 shows that our many-to-many multi-task model again outperforms our strongest baseline (with statistical significance of p < 0.01 on all metrics), as well as all the previous state-of-theart results by large absolute margins on all metrics. It also achieves significant improvements on some metrics over the one-to-many and many-toone models.7 Overall, we achieve the best results to date on YouTube2Text (MSVD) on all metrics. 5.2 Video Captioning on MSR-VTT, M-VAD In Table 2, we also train and evaluate our final many-to-many multi-task model on two other video captioning datasets (using their standard splits; details in supplementary). First, we evaluate on the new MSR-VTT dataset (Xu et al., 2016). Since this is a recent dataset, we list previous works’ results as reported by the MSR-VTT dataset paper itself.8 We improve over all of these significantly. Moreover, they maintain a leaderboard9 on this dataset and we also report the top 3 systems from it. Based on their ranking method, our multi-task model achieves the new rank 1 on this leaderboard. 
In Table 3, we further evaluate our model on the challenging movie-based M-VAD dataset, and again achieve improvements over all previous work (Venugopalan et al., 2015a; 7Many-to-many model’s improvements have a statistical significance of p < 0.01 on all metrics w.r.t. baseline, and p < 0.01 on CIDEr-D w.r.t. both one-to-many and many-toone models, and p < 0.04 on METEOR w.r.t. one-to-many. 8In their updated supplementary at https: //www.microsoft.com/en-us/research/wp-content/ uploads/2016/10/cvpr16.supplementary.pdf 9http://ms-multimedia-challenge.com/leaderboard Models M C R B Entailment Generation 28.0 108.4 59.7 36.6 +Video Caption (M-to-1) 28.7 114.5 60.8 38.9 Table 4: Entailment generation results with the four metrics. Pan et al., 2016a; Yao et al., 2015).10 5.3 Entailment Generation Results Above, we showed that the new entailment generation task helps improve video captioning. Next, we show that the video captioning task also inversely helps the entailment generation task. Given a premise, the task of entailment generation is to generate an entailed hypothesis. We use only the entailment pairs subset of the SNLI corpus for this, but with a multi-reference split setup to allow automatic metric evaluation and a zero traintest premise overlap (see Sec. 4.1). All the hyperparameter details (again tuned on the validation set) are presented in the supplementary. Table 4 presents the entailment generation results for the baseline (sequence-to-sequence with attention, 3ensemble, beam search) and the multi-task model which uses video captioning (shared decoder) on top of the baseline. A mixing ratio of 100 : 20 alternate mini-batches of entailment generation and video captioning (resp.) works well.11 The multitask model achieves stat. significant (p < 0.01) improvements over the baseline on all metrics, thus demonstrating that video captioning and entailment generation both mutually help each other. 5.4 Human Evaluation In addition to the automated evaluation metrics, we present pilot-scale human evaluations on the YouTube2Text (Table 1) and entailment generation (Table 4) results. In each case, we compare our strongest baseline with our final multi-task model by taking a random sample of 200 generated captions (or entailed hypotheses) from the test set and removing the model identity to anonymize the two models, and ask the human evaluator to choose the better model based on relevance and coherence (described in Sec. 4.2). As shown in Table 5, the multi-task models are always better than the strongest baseline for both video captioning and entailment generation, on both relevance 10Following previous work, we only use METEOR because M-VAD only has a single reference caption per video. 11Note that this many-to-one model prefers a different mixing ratio and learning rate than the many-to-one model for improving video captioning (Sec. 5.1), because these hyperparameters depend on the primary task being improved, as also discussed in previous work (Luong et al., 2016). 1280 (a) (b) (c) Figure 5: Examples of generated video captions on the YouTube2Text dataset: (a) complex examples where the multi-task model performs better than the baseline; (b) ambiguous examples (i.e., ground truth itself confusing) where multi-task model still correctly predicts one of the possible categories (c) complex examples where both models perform poorly. YouTube2Text Entailment Relev. Coher. Relev. Coher. Not Distinguish. 
65.0% 93.0% 73.5% 94.5% Baseline Wins 14.0% 1.0% 12.5% 1.5% Multi-Task Wins 21.0% 6.0% 15.0% 4.0% Table 5: Human evaluation on captioning and entailment. Given Premise Generated Entailment a man on stilts is playing a tuba for money on the boardwalk a man is playing an instrument a girl looking through a large telescope on a school trip a girl is looking at something several young people sit at a table playing poker people are playing a game the stop sign is folded up against the side of the bus the sign is not moving a blue and silver monster truck making a huge jump over crushed cars a truck is being driven Table 6: Examples of our multi-task model’s generated entailment hypotheses given a premise. and coherence, and with similar improvements (27%) as the automatic metrics (shown in Table 1). 5.5 Analysis Fig. 5 shows video captioning generation results on the YouTube2Text dataset where our final M-to-M multi-task model is compared with our strongest attention-based baseline model for three categories of videos: (a) complex examples where the multi-task model performs better than the baseline; (b) ambiguous examples (i.e., ground truth itself confusing) where multi-task model still correctly predicts one of the possible categories (c) complex examples where both models perform poorly. Overall, we find that the multi-task model generates captions that are better at both temporal action prediction and logical entailment (i.e., correct subset of full video premise) w.r.t. the ground truth captions. The supplementary also provides ablation examples of improvements by the 1-to-M video prediction based multi-task model alone, as well as by the M-to-1 entailment based multi-task model alone (over the baseline). On analyzing the cases where the baseline is better than the final M-to-M multi-task model, we find that these are often scenarios where the multitask model’s caption is also correct but the baseline caption is a bit more specific, e.g., “a man is holding a gun” vs “a man is shooting a gun”. Finally, Table 6 presents output examples of our entailment generation multi-task model (Sec. 5.3), showing how the model accurately learns to produce logically implied subsets of the premise. 6 Conclusion We presented a multimodal, multi-task learning approach to improve video captioning by incorporating temporally and logically directed knowledge via video prediction and entailment generation tasks. We achieve the best reported results (and rank) on three datasets, based on multiple automatic and human evaluations. We also show mutual multi-task improvements on the new entailment generation task. In future work, we are applying our entailment-based multi-task paradigm to other directed language generation tasks such as image captioning and document summarization. Acknowledgments We thank the anonymous reviewers for their helpful comments. This work was partially supported by a Google Faculty Research Award, an IBM Faculty Award, a Bloomberg Data Science Research Grant, and NVidia GPU awards. 1281 References Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. 2007. Multi-task feature learning. In NIPS. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Rich Caruana. 1998. Multitask learning. In Learning to learn, Springer, pages 95–133. 
David L Chen and William B Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 190–200. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325 . Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In CVPR. IEEE, pages 248–255. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In EACL. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Sergio Guadarrama, Niveda Krishnamoorthy, Girish Malkarnenkar, Subhashini Venugopalan, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2013. Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In CVPR. pages 2712–2719. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Haiqi Huang, Yueming Lu, Fangwei Zhang, and Songlin Sun. 2013. A multi-modal clustering method for web videos. In International Conference on Trustworthy Computing and Services. pages 163– 169. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML. Sergio Jimenez, George Duenas, Julia Baquero, Alexander Gelbukh, Av Juan Dios B´atiz, and Av Mendiz´abal. 2014. UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment. In In SemEval. pages 732–742. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Vladyslav Kolesnyk, Tim Rockt¨aschel, and Sebastian Riedel. 2016. Generating natural language inference chains. arXiv preprint arXiv:1606.01404 . Abhishek Kumar and Hal Daum´e III. 2012. Learning task grouping and overlap in multi-task learning. In ICML. Alice Lai and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to semantics. Proc. SemEval 2:5. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 workshop. volume 8. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In ICLR. Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York. Pingbo Pan, Zhongwen Xu, Yi Yang, Fei Wu, and Yueting Zhuang. 2016a. Hierarchical recurrent neural encoder for video representation with application to captioning. In CVPR. pages 1029–1038. Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, and Yong Rui. 2016b. Jointly modeling embedding and translation to bridge video and language. In CVPR. pages 4594–4602. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL. pages 311–318. Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In ICLR. Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 . 
Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. 2015. Unsupervised learning of video representations using lstms. In ICML. pages 843–852. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS. pages 3104–3112. 1282 Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. 2016. Inception-v4, inception-resnet and the impact of residual connections on learning. In CoRR. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In CVPR. pages 1–9. Jesse Thomason, Subhashini Venugopalan, Sergio Guadarrama, Kate Saenko, and Raymond J Mooney. 2014. Integrating language and vision to generate natural language descriptions of videos in the wild. In COLING. Atousa Torabi, Christopher Pal, Hugo Larochelle, and Aaron Courville. 2015. Using descriptive video services to create a large data source for video annotation research. arXiv preprint arXiv:1503.01070 . Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In CVPR. pages 4566–4575. Subhashini Venugopalan, Lisa Anne Hendricks, Raymond Mooney, and Kate Saenko. 2016. Improving lstm-based video description with linguistic knowledge mined from text. In EMNLP. Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2015a. Sequence to sequence-video to text. In CVPR. pages 4534–4542. Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. 2015b. Translating videos to natural language using deep recurrent neural networks. In NAACL HLT. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In CVPR. pages 5288–5296. Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. 2015. Describing videos by exploiting temporal structure. In CVPR. pages 4507–4515. Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. 2016. Video paragraph captioning using hierarchical recurrent neural networks. In CVPR. 1283
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1284–1296, Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1118

Enriching Complex Networks with Word Embeddings for Detecting Mild Cognitive Impairment from Speech Transcripts
Leandro B. dos Santos1, Edilson A. Corrêa Jr1, Osvaldo N. Oliveira Jr2, Diego R. Amancio1, Letícia L. Mansur3, Sandra M. Aluísio1
1 Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, São Paulo, Brazil
2 São Carlos Institute of Physics, University of São Paulo, São Carlos, São Paulo, Brazil
3 Department of Physiotherapy, Speech Pathology and Occupational Therapy, University of São Paulo, São Paulo, São Paulo, Brazil
{leandrobs,edilsonacjr,lamansur}@usp.br, [email protected] {diego,sandra}@icmc.usp.br

Abstract
Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose. Linguistic features, mainly from parsers, have been used to detect MCI, but this is not suitable for large-scale assessments. MCI disfluencies produce non-grammatical speech that requires manual or high-precision automatic correction of transcripts. In this paper, we modeled transcripts into complex networks and enriched them with word embedding (CNE) to better represent short texts produced in neuropsychological assessments. The network measurements were applied with well-known classifiers to automatically identify MCI in transcripts, in a binary classification task. A comparison was made with the performance of traditional approaches using Bag of Words (BoW) and linguistic features for three datasets: DementiaBank in English, and Cinderella and Arizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using only complex networks, while Support Vector Machine was superior to other classifiers. CNE provided the highest accuracies for DementiaBank and Cinderella, but BoW was more efficient for the Arizona-Battery dataset, probably owing to its short narratives. The approach using linguistic features yielded higher accuracy if the transcriptions of the Cinderella dataset were manually revised. Taken together, the results indicate that complex networks enriched with embeddings are promising for detecting MCI in large-scale assessments.

1 Introduction
Mild Cognitive Impairment (MCI) can affect one or multiple cognitive domains (e.g. memory, language, visuospatial skills and executive functions), and may represent a pre-clinical stage of Alzheimer’s disease (AD). The impairment that affects memory, referred to as amnestic MCI, is the most frequent, with the highest conversion rate for AD, at 15% per year versus 1 to 2% for the general population. Since dementias are chronic and progressive diseases, their early diagnosis ensures a greater chance of success to engage patients in non-pharmacological treatment strategies such as cognitive training, physical activity and socialization (Teixeira et al., 2012). Language is one of the most efficient information sources to assess cognitive functions. Changes in language usage are frequent in patients with dementia and are normally first recognized by the patients themselves or their family members.
Therefore, the automatic analysis of discourse production is promising in diagnosing MCI at early stages, which may address potentially reversible factors (Muangpaisan et al., 2012). Proposals to detect language-related impairment in dementias include machine learning (Jarrold et al., 2010; Roark et al., 2011; Fraser et al., 2014, 2015), magnetic resonance imaging (Dyrba et al., 2015), and data screening tests added to demographic information (Weakley et al., 2015). Discourse production (mainly narratives) is attractive because it allows the analysis of linguistic microstructures, including phonetic-phonological, morphosyntactic and semantic-lexical components, as well as semantic-pragmatic macrostructures. Automated discourse analysis based on Natural Language Processing (NLP) resources and tools to diagnose dementias via machine learning methods has been used for English language (Lehr et al., 1284 2012; Jarrold et al., 2014; Orimaye et al., 2014; Fraser et al., 2015; Davy et al., 2016) and for Brazilian Portuguese (Alu´ısio et al., 2016). A variety of features are required for this analysis, including Part-of-Speech (PoS), syntactic complexity, lexical diversity and acoustic features. Producing robust tools to extract these features is extremely difficult because speech transcripts used in neuropsychological evaluations contain disfluencies (repetitions, revisions, paraphasias) and patient’s comments about the task being evaluated. Another problem in using linguistic knowledge is the high dependence on manually created resources, such as hand-crafted linguistic rules and/or annotated corpora. Even when traditional statistical techniques (Bag of Words or ngrams) are applied, problems still appear in dealing with disfluencies, because mispronounced words will not be counted together. Indeed, other types of disfluencies (repetition, amendments, patient’s comments about the task) will be counted, thus increasing the vocabulary. An approach applied successfully to several areas of NLP (Mihalcea and Radev, 2011), which may suffer less from the problems mentioned above, relies on the use of complex networks and graph theory. The word adjacency network model (i Cancho and Sol´e, 2001; Roxas and Tapang, 2010; Amancio et al., 2012a; Amancio, 2015b) has provided good results in text classification (de Arruda et al., 2016) and related tasks, namely author detection (Amancio, 2015a), identification of literary movements (Amancio et al., 2012c), authenticity verification (Amancio et al., 2013) and word sense discrimination (Amancio et al., 2012b). In this paper, we show that speech transcripts (narratives or descriptions) can be modeled into complex networks that are enriched with word embedding in order to better represent short texts produced in these assessments. When applied to a machine learning classifier, the complex network features were able to distinguish between control participants and mild cognitive impairment participants. Discrimination of the two classes could be improved by combining complex networks with linguistic and traditional statistical features. 
With regard to the task of detecting MCI from transcripts, this paper is, to the best of our knowledge, the first to: a) show that classifiers using features extracted from transcripts modeled into complex networks enriched with word embedding present higher accuracy than using only complex networks for 3 datasets; and b) show that for languages that do not have competitive dependency and constituency parsers to exploit syntactic features, e.g. Brazilian Portuguese, complex networks enriched with word embedding constitute a source to extract new, language independent features from transcripts. 2 Related Work Detection of memory impairment has been based on linguistic, acoustic, and demographic features, in addition to scores of neuropsychological tests. Linguistic and acoustic features were used to automatically detect aphasia (Fraser et al., 2014); and AD (Fraser et al., 2015) or dementia (Orimaye et al., 2014) in the public corpora of DementiaBank1. Other studies distinguished different types of dementia (Garrard et al., 2014; Jarrold et al., 2014), in which speech samples were elicited using the Picnic picture of the Western Aphasia Battery (Kertesz, 1982). Davy et al. (2016) also used the Picnic scene to detect MCI, where the subjects were asked to write (by hand) a detailed description of the scene. As for automatic detection of MCI in narrative speech, Roark et al. (2011) extracted speech features and linguistic complexity measures of speech samples obtained with the Wechsler Logical Memory (WLM) subtest (Wechsler et al., 1997), and Lehr et al. (2012) fully automatized the WLM subtest. In this test, the examiner tells a short narrative to a subject, who then retells the story to the examiner, immediately and after a 30minute delay. WLM scores are obtained by counting the number of story elements recalled. T´oth et al. (2015) and Vincze et al. (2016) used short animated films to evaluate immediate and delayed recalls in MCI patients who were asked to talk about the first film shown, then about their previous day, and finally about another film shown last. T´oth et al. (2015) adopted automatic speech recognition (ASR) to extract a phonetic level segmentation, which was used to calculate acoustic features. Vincze et al. (2016) used speech, morphological, semantic, and demographic features collected from their speech transcripts to automatically identify patients suffering from MCI. For the Portuguese language, machine learning 1talkbank.org/DementiaBank/ 1285 algorithms were used to identify subjects with AD and MCI. Alu´ısio et al. (2016) used a variety of linguistic metrics, such as syntactic complexity, idea density (da Cunha et al., 2015), and text cohesion through latent semantics. NLP tools with high precision are needed to compute these metrics, which is a problem for Portuguese since no robust dependency or constituency parsers exist. Therefore, the transcriptions had to be manually revised; they were segmented into sentences, following a semantic-structural criterion and capitalization was applied. The authors also removed disfluencies and inserted omitted subjects when they were hidden, in order to reduce parsing errors. This process is obviously expensive, which has motivated us to use complex networks in the present study to model transcriptions and avoid a manual preprocessing step. 
3 Modeling and Characterizing Texts as Complex Networks The theory and concepts of complex networks have been used in several NLP tasks (Mihalcea and Radev, 2011; Cong and Liu, 2014), such as text classification (de Arruda et al., 2016), summarization (Antiqueira et al., 2009; Amancio et al., 2012a) and word sense disambiguation (Silva and Amancio, 2012). In this study, we used the word co-occurrence model (also called word adjacency model) because most of the syntactical relations occur among neighboring words (i Cancho et al., 2004). Each distinct word becomes a node and words that are adjacent in the text are connected by an edge. Mathematically, a network is defined as an undirected graph G = {V, E}, formed by a set V = {v1, v2, ..., vn} of nodes (words) and a set E = {e1, e2, ..., em} of edges (co-occurrence) that are represented by an adjacency matrix A, whose elements Aij are equal to 1 whenever there is an edge connecting nodes (words) i and j, and equal to 0 otherwise. Before modeling texts into complex networks, it is often necessary to do some preprocessing in the raw text. Preprocessing starts with tokenization where each document/text is divided into tokens (meaningful elements, e.g., words and punctuation marks) and then stopwords and punctuation marks are removed, since they have little semantic meaning. One last step we decided to eliminate from the preprocessing pipeline is lemmatization, which transforms each word into its canonical 1 2 5 4 0 7 6 3 11 10 9 8 water running floor boy taking cookies cookie jar stool falling girl asking Figure 1: Example of co-occurrence network enriched with semantic information for the following transcription: “The water’s running on the floor. Boy’s taking cookies out of cookie out of the cookie jar. The stool is falling over. The girl was asking for a cookie.”. The solid edges of the network represent co-occurrence edges and the dotted edges represent connections between words that had similarity higher than 0.5. form. This decision was made based on two factors. First, a recent work has shown that lemmatization has little or no influence when network modeling is adopted in related tasks (Machicao et al., 2016). Second, the lemmatization process requires part-of-speech (POS) tagging that may introduce undesirable noises/errors in the text, since the transcriptions in our work contain disfluencies. Another problem with transcriptions in our work is their size. As demonstrated by Amancio (2015c), classification of small texts using networks can be impaired, since short texts have almost linear networks, and the topological measures of these networks have little or no information relevant to classification. To solve this problem, we adapted the approach of inducing language networks from word embeddings, proposed by Perozzi et al. (2014) to enrich the networks with semantic information. In their work, language networks were generated from continuous word representations, in which each word is represented by a dense, real-valued vector obtained by training neural networks in the language model task (or variations, such as context prediction) (Bengio et al., 2003; Collobert et al., 2011; Mikolov et al., 2013a,b). This structure is known to capture syntactic and semantic information. Perozzi et al. 
(2014), in particular, take advantage of word embeddings to build networks where each word is 1286 (a) (b) Figure 2: Example of (a) co-occurrence network created for a transcript of the Cookie Theft dataset (see Supplementary Information, Section A) and (b) the same co-occurrence network enriched with semantic information. Note that (b) is a more informative network than (a), since (a) is practically a linear network. a vertex and edges are defined by similarity between words established by the proximity of the word vectors. Following this methodology, in our model we added new edges to the co-occurrence networks considering similarities between words, that is, for all pairs of words in the text that were not connected, an edge was created if their vectors (from word embedding) had a cosine similarity higher than a given threshold. Figure 1 shows an example of a co-occurrence network enriched by similarity links (the dotted edges). The gain in information by enriching a co-occurrence network with semantic information is readily apparent in Figure 2. 4 Datasets, Features and Methods 4.1 Datasets The datasets2 used in our study consisted of: (i) manually segmented and transcribed samples from the DementiaBank and Cinderella story and (ii) transcribed samples of Arizona Battery for Communication Disorders of Dementia (ABCD) automatically segmented into sentences, since we are working towards a fully automated system to detect MCI in transcripts and would like to evaluate a dataset which was automatically processed. The DementiaBank dataset is composed of short English descriptions, while the Cinderella dataset contains longer Brazilian Portuguese narratives. ABCD dataset is composed of very short narratives, also in Portuguese. Below, we describe 2All datasets are made available in the same representations used in this work, upon request to the authors. in further detail the datasets, participants, and the task in which they were used. 4.1.1 The Cookie Theft Picture Description Dataset The clinical dataset used for the English language was created during a longitudinal study conducted by the University of Pittsburgh School of Medicine on Alzheimer’s and related dementia, funded by the National Institute of Aging. To be eligible for inclusion in the study, all participants were required to be above 44 years of age, have at least 7 years of education, no history of nervous system disorders nor be taking neuroleptic medication, have an initial Mini-Mental State Exam (MMSE) score of 10 or greater, and be able to give informed consent. The dataset contains transcripts of verbal interviews with AD and related Dementia patients, including those with MCI (for further details see (Becker et al., 1994)). We used 43 transcriptions with MCI in addition to another 43 transcriptions sampled from 242 healthy elderly people to be used as the control group. Table 1 shows the demographic information for the two diagnostic groups. Demographic Control MCI Avg. Age (SD) 64.1 (7.2) 69.3 (8.2) No. of Male/Female 23/20 27/16 Table 1: Demographic information of participants in the Cookie Theft dataset. For this dataset, interviews were conducted in English and narrative speech was elicited using the Cookie Theft picture (Goodglass et al., 2001) (Figure 3 from Goodglass et al. (2001) in Section A.1). During the interview, patients were given the picture and were told to discuss everything they could see happening in the picture. 
The patients’ verbal utterances were recorded and then transcribed into the CHAT (Codes for the Human Analysis of Transcripts) transcription format (MacWhinney, 2000). We extracted the word-level transcript patient sentences from the CHAT files and discarded the annotations, as our goal was to create a fully automated system that does not require the input of a human annotator. We automatically removed filled pauses such as uh, um , er , and ah (e.g. uh it seems to be summer out), short false starts (e.g. just t the ones ), and repetition (e.g. mother’s finished certain of the the dishes ), as in (Fraser et al., 1287 2015). The control group had an average of 9.58 sentences per narrative, with each sentence having an average of 9.18 words; while the MCI group had an average of 10.97 sentences per narrative, with 10.33 words per sentence in average. 4.1.2 The Cinderella Narrative Dataset The dataset examined in this study included 20 subjects with MCI and 20 normal elderly control subjects, as diagnosed at the Medical School of the University of S˜ao Paulo (FMUSP). Table 2 shows the demographic information of the two diagnostic groups, which were also used in Alu´ısio et al. (2016). Demographic Control MCI Avg. Age (SD) 74.8 (11.3) 73.3 (5.9) Avg. Years of 11.4 (2.6) 10.8 (4.5) Education (SD) No. of Male/Female 27/16 29/14 Table 2: Demographic information of participants in the Cinderella dataset. The criteria used to diagnose MCI came from Petersen (2004). Diagnostics were carried out by a multidisciplinary team consisting of psychiatrists, geriatricians, neurologists, neuropsychologists, speech pathologists, and occupational therapists, by a criterion of consensus. Inclusion criteria for the control group were elderlies with no cognitive deficits and preservation of functional capacity in everyday life. The exclusion criteria for the normal group were: poorly controlled clinical diseases, sensitive deficits that were not being compensated for and interfered with the performance in tests, and other neurological or psychiatric diagnoses associated with dementia or cognitive deficits and use of medications in doses that affected cognition. Speech narrative samples were elicited by having participants tell the Cinderella story; participants were given as much time as they needed to examine a picture book illustrating the story (Figure 4 in Section A). When each participant had finished looking at the pictures, the examiner asked the subject to tell the story in their own words, as in Saffran et al. (1989). The time was recorded, but there was no limit imposed to the narrative length. If the participant had difficulty initiating or continuing speech, or took a long pause, an evaluator would use the stimulus question “What happens next ?”, seeking to encourage the participant to continue his/her narrative. When the subject was unable to proceed with the narrative, the examiner asked if he/she had finished the story and had something to add. Each speech sample was recorded and then manually transcribed at the word level following the NURC/SP N. 338 EF and 331 D2 transcription norms3. Other tests were applied after the narrative, in the following sequence: phonemic verbal fluency test, action verbal fluency, Camel and Cactus test (Bozeat et al., 2000), and Boston Naming test (Kaplan et al., 2001), in order to diagnose the groups. 
Since our ultimate goal is to create a fully automated system that does not require the input of a human annotator, we manually segmented sentences to simulate a high-quality ASR transcript with sentence segmentation, and we automatically removed the disfluencies following the same guidelines of TalkBank project. However, other disfluencies (revisions, elaboration, paraphasias and comments about the task) were kept. The control group had an average of 30.80 sentences per narrative, and each sentence averaged 12.17 words. As for the MCI group, it had an average of 29.90 sentences per narrative, and each sentence averaged 13.03 words. We also evaluated a different version of the dataset used in Alu´ısio et al. (2016), where narratives were manually annotated and revised to improve parsing results. The revision process was the following: (i) in the original transcript, segments with hesitations or repetitions of more than one word or segment of a single word were annotated to become a feature and then removed from the narrative to allow the extraction of features from parsing; (ii) empty emissions, which were comments unrelated to the topic of narration or confirmations, such as “n´e” (alright), were also annotated and removed; (iii) prolongations of vowels, short pauses and long pauses were also annotated and removed; and (iv) omitted subjects in sentences were inserted. In this revised dataset, the control group had an average of 45.10 sentences per narrative, and each sentence averaged 8.17 words. The MCI group had an average of 31.40 sentences per narrative, with each sentence averaging 10.91 words. 4.1.3 The ABCD Dataset The subtest of immediate/delayed recall of narratives of the ABCD battery was administered to 23 3albertofedel.blogspot.com.br/2010_11_ 01_archive.html 1288 participants with a diagnosis of MCI and 20 normal elderly control participants, as diagnosed at the Medical School of the University of S˜ao Paulo (FMUSP). MCI subjects produced 46 narratives while the control group produced 39 ones. In order to carry out experiments with a balanced corpus, as with the previous two datasets, we excluded seven transcriptions from the MCI group. We used the automatic sentence segmentation method referred to as DeepBond (Treviso et al., 2017) in the transcripts. Table 3 shows the demographic information. The control group had an average of 5.23 sentences per narrative, with 11 words per sentence on average, and the MCI group had an average of 4.95 sentences per narrative, with an average of 12.04 words per sentence. Interviews were conducted in Portuguese and the subject listened to the examiner read a short narrative. The subject then retold the narrative to the examiner twice: once immediately upon hearing it and again after a 30-minute delay (Bayles and Tomoeda, 1991). Each speech sample was recorded and then manually transcribed at the word level following the NURC/SP N. 338 EF and 331 D2 transcription norms. Demographic Control MCI Avg. Age (SD) 61 (7.5) 72,0 (7.4) Avg. Years of 16 (7.6) 13.3 (4.2) Education (SD) No. of Male/Female 6/14 16/7 Table 3: Demographic information of participants in the ABCD dataset. 4.2 Features Features of three distinct natures were used to classify the transcribed texts: topological metrics of co-occurrence networks, linguistic features and bag of words representations. 4.2.1 Topological Characterization of Networks Each transcription was mapped into a cooccurrence network, and then enriched via word embeddings using the cosine similarity of words. 
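To make this construction concrete, the sketch below is illustrative only and not the authors' implementation: it builds the enriched network for a single preprocessed transcript, assuming networkx is available, that embeddings is a word-to-vector dictionary covering the vocabulary (e.g. obtained from a subword-aware model such as fastText), and that threshold plays the role of the cosine-similarity cutoff reported later in Section 5 (0.7 for Cookie Theft, 0.4 for Cinderella and ABCD). Function and variable names are ours, chosen for readability.

# Illustrative sketch (not the authors' code): build a word co-occurrence
# network from a tokenized transcript and add embedding-similarity edges,
# following the construction described in Sections 3 and 4.2.1.
import itertools
import networkx as nx
import numpy as np

def cosine(u, v):
    # cosine similarity between two dense word vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_enriched_network(tokens, embeddings, threshold=0.7):
    # tokens: list of words after stopword/punctuation removal (no lemmatization)
    # embeddings: dict mapping a word to a dense vector; assumed to cover all tokens
    # threshold: cosine-similarity cutoff for the extra semantic edges
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    # co-occurrence edges between adjacent words
    for w1, w2 in zip(tokens, tokens[1:]):
        if w1 != w2:
            g.add_edge(w1, w2, kind="cooccurrence")
    # semantic enrichment: connect word pairs not already linked whose
    # vectors are more similar than the threshold
    for w1, w2 in itertools.combinations(set(tokens), 2):
        if not g.has_edge(w1, w2) and w1 in embeddings and w2 in embeddings:
            if cosine(embeddings[w1], embeddings[w2]) > threshold:
                g.add_edge(w1, w2, kind="similarity")
    return g

Because the similarity edges are only added between pairs that co-occurrence did not already link, short and nearly linear networks gain additional structure, which is the effect illustrated qualitatively in Figure 2.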
Since the occurrence of out-of-vocabulary words is common in texts of neuropsychological assessments, we used the method proposed by Bojanowski et al. (2016) to generate word embeddings. This method extends the skip-gram model to use character-level information, with each word being represented as a bag of character n-grams. It provides some improvement in comparison with the traditional skip-gram model in terms of syntactic evaluation (Mikolov et al., 2013b) but not for semantic evaluation. Once the network has been enriched, we characterize its topology using the following ten measurements: 1. PageRank: is a centrality measurement that reflects the relevance of a node based on its connections to other relevant nodes (Brin and Page, 1998); 2. Betweenness: is a centrality measurement that considers a node as relevant if it is highly accessed via shortest paths. The betweenness of a node v is defined as the fraction of shortest paths going through node v; 3. Eccentricity: of a node is calculated by measuring the shortest distance from the node to all other vertices in the graph and taking the maximum; 4. Eigenvector centrality: is a measurement that defines the importance of a node based on its connectivity to high-rank nodes; 5. Average Degree of the Neighbors of a Node: is the average of the degrees of all its direct neighbors; 6. Average Shortest Path Length of a Node: is the average distance between this node and all other nodes of the network; 7. Degree: is the number of edges connected to the node; 8. Assortativity Degree: or degree correlation measures the tendency of nodes to connect to other nodes that have similar degree; 9. Diameter: is defined as the maximum shortest path; 10. Clustering Coefficient: measures the probability that two neighbors of a node are connected. Most of the measurements described above are local measurements, i.e. each node i possesses a value Xi, so we calculated the average µ(X), standard deviation σ(X) and skewness γ(X) for each measurement (Amancio, 2015b). 1289 4.2.2 Linguistic Features Linguistic features for classification of neuropsychological assessments have been used in several studies (Roark et al., 2011; Jarrold et al., 2014; Fraser et al., 2014; Orimaye et al., 2014; Fraser et al., 2015; Vincze et al., 2016; Davy et al., 2016). We used the Coh-Metrix4(Graesser et al., 2004) tool to extract features from English transcripts, resulting in 106 features. The metrics are divided into eleven categories: Descriptive, Text Easability Principal Component, Referential Cohesion, Latent Semantic Analysis (LSA), Lexical Diversity, Connectives, Situation Model, Syntactic Complexity, Syntactic Pattern Density, Word Information, and Readability (Flesch Reading Ease, Flesch-Kincaid Grade Level, Coh-Metrix L2 Readability). For Portuguese, Coh-Metrix-Dementia (Alu´ısio et al., 2016) was used. The metrics affected by constituency and dependency parsing were not used because they are not robust with disfluencies. Metrics based on manual annotation (such as proportion short pauses, mean pause duration, mean number of empty words, and others) were also discarded. The metrics of Coh-MetrixDementia are divided into twelve categories: Ambiguity, Anaphoras, Basic Counts, Connectives, Co-reference Measures, Content Word Frequencies, Hypernyms, Logic Operators, Latent Semantic Analysis, Semantic Density, Syntactical Complexity, and Tokens. The metrics used are shown in detail in Section A.2. In total, 58 metrics were used, from the 73 available on the website5. 
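Referring back to the topological measurements listed in Section 4.2.1, the sketch below shows one way to turn the node-level measurements into a fixed-length feature vector per transcript, taking the mean, standard deviation and skewness over nodes and keeping assortativity and diameter as global values. It is an illustration under our own assumptions, not the authors' code; the networkx and scipy functions used here are stand-ins for whatever implementation was actually employed, and a connected, undirected network is assumed.

# Illustrative sketch (not the authors' code): summarize node-level network
# measurements into the per-transcript feature vector of Section 4.2.1.
import networkx as nx
import numpy as np
from scipy.stats import skew

def summarize(measure):
    # mean, standard deviation and skewness of a node -> value mapping
    v = np.asarray(list(measure.values()), dtype=float)
    return [float(v.mean()), float(v.std()), float(skew(v))]

def network_features(g):
    # g: connected, undirected enriched co-occurrence network
    local = [
        nx.pagerank(g),
        nx.betweenness_centrality(g),
        nx.eccentricity(g),
        nx.eigenvector_centrality(g, max_iter=1000),
        nx.average_neighbor_degree(g),
        # average shortest path length of each node to all other nodes
        {n: np.mean([d for t, d in nx.shortest_path_length(g, source=n).items() if t != n])
         for n in g},
        dict(g.degree()),
        nx.clustering(g),
    ]
    feats = []
    for m in local:
        feats.extend(summarize(m))
    # global measurements
    feats.append(nx.degree_assortativity_coefficient(g))
    feats.append(nx.diameter(g))
    return np.asarray(feats, dtype=float)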
4.2.3 Bag of Words The representation of text collections under the BoW assumption (i.e., with no information relating to word order) has been a robust solution for text classification. In this methodology, transcripts are represented by a table in which the columns represent the terms (or existing words) in the transcripts and the values represent frequency of a term in a document. 4.3 Classification Algorithms In order to quantify the ability of the topological characterization of networks, linguistic metrics and BoW features were used to distinguish subjects with MCI from healthy controls. We 4cohmetrix.com 5http://143.107.183.175:22380 employed four machine learning algorithms to induce classifiers from a training set. These techniques were the Gaussian Naive Bayes (GNB), k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), linear and radial bases functions (RBF), and Random Forest (RF). We also combined these classifiers through ensemble and multi-view learning. In ensemble learning, multiple models/classifiers are generated and combined using a majority vote or the average of class probabilities to produce a single result (Zhou, 2012). In multi-view learning, multiple classifiers are trained in different feature spaces and thus combined to produce a single result. This approach is an elegant solution in comparison to combining all features in the same vector or space, for two main reasons. First, combination is not a straightforward step and may lead to noise insertion since the data have different natures. Second, using different classifiers for each feature space allows for different weights to be given for each type of feature, and these weights can be learned by a regression method to improve the model. In this work, we used majority voting to combine different feature spaces. 5 Experiments and Results All experiments were conducted using the Scikitlearn6 (Pedregosa et al., 2011), with classifiers evaluated on the basis of classification accuracy i.e. the total proportion of narratives which were correctly classified. The evaluation was performed using 5-fold cross-validation instead of the well-accepted 10-fold cross-validation because the datasets in our study were small and the test set would have shrunk, leading to less precise measurements of accuracy. The threshold parameter was optimized with the best values being 0.7 in the Cookie Theft dataset and 0.4 in both the Cinderella and ABCD datasets. We used the model proposed by Bojanowski et al. (2016) with default parameters (100 dimensional embeddings, context window equal to 5 and 5 epochs) to generate word embedding. We trained the models in Portuguese and English Wikipedia dumps from October and November 2016 respectively. The accuracy in classification is given in Tables 4 through 6. CN, CNE, LM, and BoW denote, respectively, complex networks, complex network 6http://scikit-learn.org 1290 enriched with embedding, linguistic metrics and Bag of Words, and CNE-LM, CNE-BoW, LMBoW and CNE-LM-BoW refer to combinations of the feature spaces (multiview learning), using the majority vote. Cells with the “–” sign mean that it was not possible to apply majority voting because there were two classifiers. The last line represents the use of an ensemble of machine learning algorithms, in which the combination used was the majority voting in both ensemble and multiview learning. 
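As a rough illustration of the multi-view majority-voting setup just described, and not the authors' exact pipeline, the sketch below trains one classifier per feature space (e.g. CNE, LM, BoW), combines their test-fold predictions by majority vote, and evaluates with 5-fold cross-validation. The choice of an RBF-kernel SVM is a placeholder, labels are assumed to be encoded as 0/1, and the feature matrices are assumed to be numpy arrays aligned on the same transcripts.

# Illustrative sketch (not the authors' code): multi-view learning with
# majority voting over per-feature-space classifiers, under 5-fold CV.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def multiview_majority_vote(views, y, make_clf=lambda: SVC(kernel="rbf"), n_splits=5):
    # views: list of feature matrices, one per feature space, each of shape
    #        (n_transcripts, n_features_of_that_space)
    # y: binary labels (1 = MCI, 0 = control)
    y = np.asarray(y)
    accs = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(views[0], y):
        votes = []
        for X in views:
            clf = make_clf()              # a fresh classifier per view
            clf.fit(X[train_idx], y[train_idx])
            votes.append(clf.predict(X[test_idx]))
        votes = np.asarray(votes)         # shape (n_views, n_test)
        # majority vote over views; with an odd number of views ties cannot occur
        majority = (votes.mean(axis=0) >= 0.5).astype(int)
        accs.append(float((majority == y[test_idx]).mean()))
    return float(np.mean(accs))

With three views (CNE, LM, BoW) this corresponds to the CNE-LM-BoW columns of the tables; restricting views to a single feature space recovers the individual classifier results.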
In general, CNE outperforms the approach using only complex networks (CN), while SVM (Linear or RBF kernel) provides higher accuracy than other machine learning algorithms. The results for the three datasets show that characterizing transcriptions into complex networks is competitive with other traditional methods, such as the use of linguistic metrics. In fact, among the three types of features, using enriched networks (CNE) provided the highest accuracies in two datasets (Cookie Theft and original Cinderella). For the ABCD dataset, which contains short narratives, the small length of the transcriptions may have had an effect, since BoW features led to the highest accuracy. In the case of the revised Cinderella dataset, segmented into sentences and capitalized as reported in Alu´ısio et al. (2016), Table 7 shows that the manual revision was an important factor, since the highest accuracies were obtained with the approach based on linguistic metrics (LM). However, this process of manually removing disfluencies demands time; therefore it is not practical for large-scale assessments. Ensemble and multi-view learning were helpful for the Cookie Theft dataset, in which multi-view learning achieved the highest accuracy (65% of accuracy for narrative texts, a 3% of improvement compared to the best individual classifier). However, neither multi-view or ensemble learning enhanced accuracy in the Cinderella dataset, where SVM-RBF with CNE space achieved the highest accuracy (65%). For the ABCD dataset, multiview CNE-LM-BoW with SVM-RBF and KNN classifiers improved the accuracy to 4% and 2%, respectively. Somewhat surprising were the results of SVM with linear kernel in BoW feature space (75% of accuracy). 6 Conclusions and Future Work In this study, we employed metrics of topological properties of CN in a machine learning classification approach to distinguish between healthy patients and patients with MCI. To the best of our knowledge, these metrics have never been used to detect MCI in speech transcripts; CN were enriched with word embeddings to better represent short texts produced in neuropsychological assessments. The topological properties of CN outperform traditional linguistic metrics in individual classifiers’ results. Linguistic features depend on grammatical texts to present good results, as can be seen in the results of the manually processed Cinderella dataset (Table 7). Furthermore, we found that combining machine and multi-view learning can improve accuracy. The accuracies found here are comparable to the values reported by other authors, ranging from 60% to 85% (Prud’hommeaux and Roark, 2011; Lehr et al., 2012; T´oth et al., 2015; Vincze et al., 2016), which means that it is not easy to distinguish between healthy subjects and those with cognitive impairments. The comparison with our results is not straightforward, though, because the databases used in the studies are different. There is a clear need for publicly available datasets to compare different methods, which would optimize the detection of MCI in elderly people. In future work, we intend to explore other methods to enrich CN, such as the Recurrent Language Model, and use other metrics to characterize an adjacency network. The pursuit of these strategies is relevant because language is one of the most efficient information sources to evaluate cognitive functions, commonly used in neuropsychological assessments. 
As this work is ongoing, we will keep collecting new transcriptions of the ABCD retelling subtest to increase the corpus size and obtain more reliable results in our studies. Our final goal is to apply neuropsychological assessment batteries, such as the ABCD retelling subtest, to mobile devices, specifically tablets. This adaptation will enable large-scale applications in hospitals and facilitate the maintenance of application history in longitudinal studies, by storing the results in databases immediately after the test application. 1291 Classifier CN CNE LM BoW CNE-LM CNE-BoW LM-BoW CNE-LM-BoW SVM-Linear 52 55 56 59 – – – 60 SVM-RBF 56 62 58 60 – – – 65 k-NN 59 61 46 57 – – – 59 RF 52 47 45 48 – – – 50 G-NB 51 48 56 55 – – – 50 Ensemble 56 60 54 58 57 60 63 65 Table 4: Classification accuracy achieved on Cookie Theft dataset. Classifier CN CNE LM BoW CNE-LM CNE-BoW LM-BoW CNE-LM-BoW SVM-Linear 52 60 52 50 – – – 52 SVM-RBF 57 65 47 37 – – – 50 k-NN 47 50 47 37 – – – 37 RF 55 57 47 45 – – – 52 G-NB 47 52 47 55 – – – 52 Ensemble 52 60 50 37 57 52 50 47 Table 5: Classification accuracy achieved on Cinderella dataset. Classifier CN CNE LM BoW CNE-LM CNE-BoW LM-BoW CNE-LM-BoW SVM-Linear 56 69 51 75 – – – 74 SVM-RBF 54 57 66 67 – – – 71 k-NN 56 56 69 63 – – – 71 RF 54 62 70 64 – – – 69 G-NB 61 55 55 65 – – – 65 Ensemble 55 61 62 72 69 68 75 73 Table 6: Classification accuracy achieved on ABCD dataset. Classifier CN CNE LM BoW SVM-Linear 50 65 65 52 SVM-RBF 57 67 72 55 KNN 42 47 55 50 RF 52 47 70 45 G-NB 52 65 62 45 Ensemble 52 60 72 45 Table 7: Classification accuracy achieved on Cinderella dataset manually processed to revise nongrammatical sentences. Acknowledgments This work was supported by CAPES, CNPq, FAPESP, and Google Research Awards in Latin America. We would like to thank NVIDIA for their donation of GPU. References Sandra M. Alu´ısio, Andre L. da Cunha, and Carolina Scarton. 2016. Evaluating progression of alzheimer’s disease by regression and classification methods in a narrative language test in portuguese. In Jo˜ao Silva, Ricardo Ribeiro, Paulo Quaresma, Andr´e Adami, and Ant´onio Branco, editors, International Conference on Computational Processing of the Portuguese Language. Springer, pages 109–114. https://doi.org/10.1007/978-3-319-41552-9 10. Diego R. Amancio. 2015a. Authorship recognition via fluctuation analysis of network topology and word intermittency. Journal of Statistical Mechanics: Theory and Experiment 2015(3):P03005. https://doi.org/10.1088/17425468/2015/03/P03005. Diego R. Amancio. 2015b. A complex network approach to stylometry. PloS one 10(8):e0136076. https://doi.org/10.1371/journal.pone.0136076. Diego R. Amancio. 2015c. Probing the topological properties of complex networks modeling short written texts. PloS one 10(2):1–17. https://doi.org/10.1371/journal.pone.0118394. Diego R. Amancio, Eduardo G. Altmann, Diego Rybski, Osvaldo N. Oliveira Jr., and Luciano da F. Costa. 2013. Probing the statistical properties of unknown texts: Application to the voynich manuscript. PLOS ONE 8(7):1–10. https://doi.org/10.1371/journal.pone.0067310. Diego R. Amancio, Maria G. V. Nunes, Osvaldo N. Oliveira Jr., and Luciano F. Costa. 2012a. Extractive summarization using complex networks and syntactic dependency. Physica A: Statistical Me1292 chanics and its Applications 391(4):1855–1864. https://doi.org/10.1016/j.physa.2011.10.015. Diego R. Amancio, O.N. Oliveira Jr., and Luciano da F. Costa. 2012b. Unveiling the relationship between complex networks metrics and word senses. 
EPL (Europhysics Letters) 98(1):18002. https://doi.org/10.1209/0295-5075/98/18002. Diego R. Amancio, Osvaldo N. Oliveira Jr., and Luciano F. Costa. 2012c. Identification of literary movements using complex networks to represent texts. New Journal of Physics 14(4):043029. https://doi.org/10.1088/1367-2630/14/4/043029. Lucas Antiqueira, Osvaldo N. Oliveira Jr., Luciano da Fontoura Costa, and Maria das Grac¸as Volpe Nunes. 2009. A complex network approach to text summarization. Information Sciences 179(5):584 – 599. Kathryn A. Bayles and Cheryl K. Tomoeda. 1991. ABCD: Arizona Battery for Communication Disorders of Dementia. Tucson, AZ: Canyonlands Publishing. James T. Becker, Franc¸ois Boiler, Oscar L. Lopez, Judith Saxton, and Karen L. McGonigle. 1994. The natural history of alzheimer’s disease: description of study cohort and accuracy of diagnosis. Archives of Neurology 51(6):585–594. https://doi.org/10.1001/archneur.1994.00540180063015. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. journal of machine learning research 3(Feb):1137–1155. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606 . Sasha Bozeat, Matthew A. Ralph, Karalyn Patterson, Peter Garrard, and John R. Hodges. 2000. Non-verbal semantic impairment in semantic dementia. Neuropsychologia 38(9):1207–1215. https://doi.org/10.1016/S0028-3932(00)00034-8. Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. In International Conference on World Wide Web. Elsevier, pages 107–117. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Jin Cong and Haitao Liu. 2014. Approaching human language with complex networks. Physics of Life Reviews 11(4):598 – 618. https://doi.org/10.1016/j.plrev.2014.04.004. Andre L. da Cunha, Lucilene B. de Sousa, Let´ıcia L. Mansur, and Sandra M. Alu´ısio. 2015. Automatic proposition extraction from dependency trees: Helping early prediction of alzheimer’s disease from narratives. In Proceedings of the 28th International Symposium on ComputerBased Medical Systems. Institute of Electrical and Electronics Engineers, pages 127–130. https://doi.org/10.1109/CBMS.2015.19. Weissenbacher Davy, Johnson A. Travis, Wojtulewicz Laura, Dueck Amylou, Locke Dona, Caselli Richard, and Gonzalez Graciela. 2016. Towards automatic detection of abnormal cognitive decline and dementia through linguistic analysis of writing samples. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1198–1207. https://doi.org/10.18653/v1/N16-1143. Henrique F. de Arruda, Luciano F. Costa, and Diego R. Amancio. 2016. Using complex networks for text classification: Discriminating informative and imaginative documents. EPL (Europhysics Letters) 113(2):28007. https://doi.org/10.1209/02955075/113/28007. Martin Dyrba, Frederik Barkhof, Andreas Fellgiebel, Massimo Filippi, Lucrezia Hausner, Karlheinz Hauenstein, Thomas Kirste, and Stefan J. Teipel. 2015. 
Predicting prodromal alzheimer’s disease in subjects with mild cognitive impairment using machine learning classification of multimodal multicenter diffusion-tensor and magnetic resonance imaging data. Journal of Neuroimaging 25(5):738– 747. https://doi.org/10.1111/jon.12214. Kathleen C. Fraser, Jed A. Meltzer, Naida L. Graham, Carol Leonard, Graeme Hirst, Sandra E. Black, and Elizabeth Rochon. 2014. Automated classification of primary progressive aphasia subtypes from narrative speech transcripts. Cortex 55:43–60. https://doi.org/10.1016/j.cortex.2012.12.006. Kathleen C. Fraser, Jed A. Meltzer, and Frank Rudzicz. 2015. Linguistic features identify alzheimer’s disease in narrative speech. Journal of Alzheimer’s Disease 49(2):407–422. https://doi.org/10.3233/JAD-150520. Peter Garrard, Vassiliki Rentoumi, Benno Gesierich, Bruce Miller, and Maria L. Gorno-Tempini. 2014. Machine learning approaches to diagnosis and laterality effects in semantic dementia discourse. Cortex 55:122–129. https://doi.org/10.1016/j.cortex.2013.05.008. Harold Goodglass, Edith Kaplan, and Barbara Barresi. 2001. The Assessment of Aphasia and Related Disorders. The Assessment of Aphasia and Related Disorders. Lippincott Williams & Wilkins. Arthur C. Graesser, Danielle S. McNamara, Max M. Louwerse, and Zhiqiang Cai. 2004. 1293 Coh-metrix: Analysis of text on cohesion and language. Behavior research methods, instruments, & computers 36(2):193–202. https://doi.org/10.3758/BF03195564. Ramon F. i Cancho, Ricard V. Sol´e, and Reinhard K¨ohler. 2004. Patterns in syntactic dependency networks. Physical Review E 69(5):051915. https://doi.org/10.1103/PhysRevE.69.051915. Ramon F. i Cancho and Richard V. Sol´e. 2001. The small world of human language. Proceedings of the Royal Society of London B: Biological Sciences 268(1482):2261–2265. https://doi.org/10.1098/rspb.2001.1800. William L. Jarrold, Bart Peintner, David Wilkins, Dimitra Vergryi, Colleen Richey, Maria L. GornoTempini, and Jennifer Ogar. 2014. Aided diagnosis of dementia type through computer-based analysis of spontaneous speech. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics Workshop on Computational Linguistics and Clinical Psychology. Association for Computational Linguistics, pages 27–36. William L. Jarrold, Bart Peintner, Eric Yeh, Ruth Krasnow, Harold S. Javitz, and Gary E. Swan. 2010. Language analytics for assessing brain health: Cognitive impairment, depression and presymptomatic alzheimer’s disease. In Yiyu Yao, Ron Sun, Tomaso Poggio, Jiming Liu, Ning Zhong, and Jimmy Huang, editors, Proceedings of International Conference on Brain Informatics (BI 2010), Springer Berlin Heidelberg, pages 299–307. https://doi.org/10.1007/978-3-642-15314-3 28. Edith Kaplan, Harold Googlass, and Sandra Weintrab. 2001. Boston naming test. Lippincott Williams & Wilkins. A. Kertesz. 1982. Western Aphasia Battery test manual. Grune & Stratton. Maider Lehr, Emily T. Prud’hommeaux, Izhak Shafran, and Brian Roark. 2012. Fully automated neuropsychological assessment for detecting mild cognitive impairment. In Proceedings of the 13th Annual Conference of the International Speech Communication Association. pages 1039–1042. Jeaneth Machicao, Edilson A. Corrˆea Jr, Gisele H. B. Miranda, Diego R. Amancio, and Odemir M. Bruno. 2016. Authorship attribution based on life-like network automata. arXiv preprint arXiv:1610.06498 . Brian MacWhinney. 2000. The CHILDES Project: Tools for analyzing talk. Lawrence Erlbaum Associates, 3 edition. 
Rada Mihalcea and Dragomir Radev. 2011. Graphbased natural language processing and information retrieval. Cambridge University Press. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168 . Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems. pages 3111–3119. Weerasak Muangpaisan, Chonachan Petcharat, and Varalak Srinonprasert. 2012. Prevalence of potentially reversible conditions in dementia and mild cognitive impairment in a geriatric clinic. Geriatrics & gerontology international 12(1):59–64. https://doi.org/10.1111/j.1447-0594.2011.00728.x. Sylvester O. Orimaye, Jojo Wong, and K. Jennifer Golden. 2014. Learning predictive linguistic features for alzheimer’s disease and related dementias using verbal utterances. In Proceedings of the 1st Workshop on Computational Linguistics and Clinical Psychology (CLPsych). Association for Computational Linguistics, pages 78–87. www.aclweb.org/anthology/W/W14/W14-3210. Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12:2825–2830. Bryan Perozzi, Rami Al-Rfou, Vivek Kulkarni, and Steven Skiena. 2014. Inducing language networks from continuous space word representations. In Proceedings of the 5th Workshop on Complex Networks CompleNet 2014, Springer, pages 261–273. https://doi.org/10.1007/978-3-319-05401-8 25. Ronald C. Petersen. 2004. Mild cognitive impairment as a diagnostic entity. Journal of internal medicine 256(3):183–194. https://doi.org/10.1111/j.13652796.2004.01388.x. Emily T. Prud’hommeaux and Brian Roark. 2011. Alignment of spoken narratives for automated neuropsychological assessment. In Proceedings of Workshop on Automatic Speech Recognition & Understanding,ASRU. Institute of Electrical and Electronics Engineers, pages 484–489. https://doi.org/10.1109/ASRU.2011.6163979. Brian Roark, Margaret Mitchell, John-Paul Hosom, Kristy Hollingshead, and Jeffrey Kaye. 2011. Spoken language derived measures for detecting mild cognitive impairment. Transactions on Audio, Speech, and Language Processing, Institute of Electrical and Electronics Engineers 19(7):2081–2090. https://doi.org/10.1109/TASL.2011.2112351. Ranzivelle M. Roxas and Giovanni Tapang. 2010. Prose and poetry classification and boundary detec1294 tion using word adjacency network analysis. International Journal of Modern Physics C 21(04):503– 512. https://doi.org/10.1142/S0129183110015257. Eleanor M. Saffran, Rita S. Berndt, and Myrna F. Schwartz. 1989. The quantitative analysis of agrammatic production: Procedure and data. Brain and language 37(3):440–479. https://doi.org/10.1016/0093-934X(89)90030-8. Thiago C. Silva and Diego R. Amancio. 2012. Word sense disambiguation via high order of learning in complex networks. EPL (Europhysics Letters) 98(5):58001. Camila V. Teixeira, Lilian T. Gobbi, Danilla I. Corazza, Florindo Stella, Jos´e L. Costa, and Sebasti˜ao Gobbi. 2012. Non-pharmacological interventions on cognitive functions in older people with mild cognitive impairment (mci). Archives of gerontology and geriatrics 54(1):175– 180. https://doi.org/10.1016/j.archger.2011.02.014. 
L´aszl´o T´oth, G´abor Gosztolya, Veronika Vincze, Ildik´o Hoffmann, and Gr´eta Szatl´oczki. 2015. Automatic detection of mild cognitive impairment from spontaneous speech using asr. In Proceedings of the 16th Annual Conference of the International Speech Communication Association. International Speech and Communication Association, pages 2694–2698. Marcos V. Treviso, Christopher Shulby, and Sandra M. Alu´ısio. 2017. Sentence segmentation in narrative transcripts from neuropsycological tests using recurrent convolutional neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1– 10. https://arxiv.org/abs/1610.00211. Veronika Vincze, G´abor Gosztolya, L´aszl´o T´oth, Ildik´o Hoffmann, and Gr´eta Szatl´oczki. 2016. Detecting mild cognitive impairment by exploiting linguistic information from transcripts. In Proceedings of the 54th Annual Meeting of the Association Computer Linguistics. Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-2030. Alyssa Weakley, Jennifer A. Williams, Maureen Schmitter-Edgecombe, and Diane J. Cook. 2015. Neuropsychological test selection for cognitive impairment classification: A machine learning approach. Journal of clinical and experimental neuropsychology 37(9):899–916. https://doi.org/10.1080/13803395.2015.1067290. David Wechsler et al. 1997. Wechsler memory scale (WMS-III). Psychological Corporation. Zhi-Hua Zhou. 2012. Ensemble methods: foundations and algorithms. Chapman & Hall/CRC, 1st edition. A Supplementary Material Figure 3 is Cookie Theft picture, which was used in DementiaBank project. Figure 4 is a sequence of pictures from the Cinderella story, which were used to elicit speech narratives. Figure 3: The Cookie Theft Picture, taken from the Boston Diagnostic Aphasia Examination (Goodglass et al., 2001). Figure 4: Sequence of Pictures of the of Cinderella story. A.1 Examples of transcriptions Below follows an example of a transcript of the Cookie Theft dataset. 1295 You just want me to start talking ? Well the little girl is asking her brother we ’ll say for a cookie . Now he ’s getting the cookie one for him and one for her . He unbalances the step the little stool and he ’s about to fall . And the lid ’s off the cookie jar . And the mother is drying the dishes abstractly so she ’s left the water running in the sink and it is spilling onto the floor . And there are two there ’s look like two cups and a plate on the sink and board . And that boy ’s wearing shorts and the little girl is in a short skirt . And the mother has an apron on . And she ’s standing at the window . The window ’s opened . It must be summer or spring . And the curtains are pulled back . And they have a nice walk around their house . And there ’s this nice shrubbery it appears and grass . And there ’s a big picture window in the background that has the drapes pulled off . There ’s a not pulled off but pulled aside . And there ’s a tree in the background . And the house with the kitchen has a lot of cupboard space under the sink board and under the cabinet from which the cookie you know cookies are being removed . Below follows an excerpt of a transcript of the Cinderella dataset. 
Original transcript in Portuguese: ela morava com a madrasta as irm˜a n´e e ela era diferenciada das trˆes era maltratada ela tinha que fazer limpeza na casa toda no castelo alias e as irm˜as n˜ao faziam nada at´e que um dia chegou um convite do rei ele ia fazer um baile e a madrasta ent˜ao ´e colocou que todas as filhas elas iam menos a cinderela bom como ela n˜ao tinha o vestido sapato as coisas tudo ent˜ao ela mesmo teve que fazer a roupa dela comec¸ou a fazer ... Translation of the transcript in English: she lived with the stepmother the sister right and she was differentiated from the three was mistreated she had to do the cleaning in the entire house actually in the castle and the sisters didn’t do anything until one day the king’s invitation arrived he would invite everyone to a ball and then the stepmother is said that all the daughters they would go except for cinderella well since she didn’t have a dress shoes all the things she had to make her own clothes she started to make them ... A.2 Coh-Metrix-Dementia metrics 1. Ambiguity: verb ambiguity, noun ambiguity, adjective ambiguity, adverb ambiguity; 2. Anaphoras: adjacent anaphoric references, anaphoric references; 3. Basic Counts: Flesch index, number of word, number of sentences, number of paragraphs, words per sentence, sentences per paragraph, syllables per content word, verb incidence, noun incidence, adjective incidence, adverb incidence, pronoun incidence, content word incidence, function word incidence; 4. Connectives: connectives incidence, additive positive connectives incidence, additive negative connectives incidence, temporal positive connectives incidence, temporal negative connectives incidence, casual positive connectives incidence, casual negative connectives incidence, logical positive connectives incidence, logical negative connectives incidence; 5. Co-reference Measures: adjacent argument overlap, argument overlap, adjacent stem overlap, stem overlap, adjacent content word overlap; 6. Content Word Frequencies: Content words frequency, minimum among content words frequency; 7. Hypernyms: Mean hypernyms per verb; 8. Logic Operators: Logic operators incidence, and incidence, or incidence, if incidence, negation incidence; 9. Latent Semantic Analysis (LSA): Average and standard deviation similarity between pairs of adjacent sentences in the text, Average and standard deviation similarity between all sentence pairs in the text, Average and standard deviation similarity between pairs of adjacent paragraphs in the text, Givenness average and standard deviation of each sentence in the text; 10. Semantic Density: content density; 11. Syntactical Complexity: only cross entropy; 12. Tokens: personal pronouns incidence, typetoken ratio, Brunet index, Honor´e Statistics. 1296
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1297–1307, Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1119

Adversarial Adaptation of Synthetic or Stale Data
Young-Bum Kim† Karl Stratos‡ Dongchan Kim†
†Microsoft AI and Research ‡Bloomberg L. P.
{ybkim, dongchan.kim}@microsoft.com [email protected]

Abstract
Two types of data shift common in practice are 1. transferring from synthetic data to live user data (a deployment shift), and 2. transferring from stale data to current data (a temporal shift). Both cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data. We propose a solution to this mismatch problem by framing it as domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain. To this end, we use and build on several recent advances in neural domain adaptation such as adversarial training (Ganin et al., 2016) and domain separation network (Bousmalis et al., 2016), proposing a new effective adversarial training scheme. In both supervised and unsupervised adaptation scenarios, our approach yields clear improvement over strong baselines.

1 Introduction
Spoken language understanding (SLU) systems analyze various aspects of a user query by classifying its domain, intent, and semantic slots. For instance, the query how is traffic to target in bellevue has domain PLACES, intent CHECK ROUTE TRAFFIC, and slots PLACE NAME: target and ABSOLUTE LOCATION: bellevue. We are interested in addressing two types of data shift common in SLU applications. The first data shift problem happens when we transfer from synthetic data to live user data (a deployment shift). This is also known as the “cold-start” problem; a model cannot be trained on the real usage data prior to deployment simply because it does not exist. A common practice is to generate a large quantity of synthetic training data that mimics the expected user behavior. Such synthetic data is crafted using domain-specific knowledge and can be time-consuming. It is also flawed in that it typically does not match the live user data generated by actual users; the real queries submitted to these systems are different from what the model designers expect to see. The second data shift problem happens when we transfer from stale data to current data (a temporal shift). In our use case, we have one set of training data from 2013 and wish to handle data from 2014–2016. This is problematic since the content of the user queries changes over time (e.g., new restaurant or movie names may be added). Consequently, the model performance degrades over time. Both shifts cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data. We propose a solution to this mismatch problem by framing it as domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain.
To this end, we use and build on several recent advances in neural domain adaptation such as adversarial training (Ganin et al., 2016) and domain separation network (Bousmalis et al., 2016), proposing a new adversarial training scheme based on randomized predictions. We consider both supervised and unsupervised adaptation scenarios (i.e., absence/presence of labeled data in the target domain). We find that unsupervised DA can greatly improve performance without requiring additional annotation. Super1297 vised DA with a small amount of labeled data gives further improvement on top of unsupervised DA. In experiments, we show clear gains in both deployment and temporal shifts across 5 test domains, yielding average error reductions of 74.04% and 41.46% for intent classification and 70.33% and 32.0% for slot tagging compared to baselines without adaptation. 2 Related Work 2.1 Domain Adaptation Our work builds on the recent success of DA in the neural network framework. Notably, Ganin et al. (2016) propose an adversarial training method for unsupervised DA. They partition the model parameters into two parts: one inducing domainspecific (or private) features and the other domaininvariant (or shared) features. The domaininvariant parameters are adversarially trained using a gradient reversal layer to be poor at domain classification; as a consequence, they produce representations that are domain agnostic. This approach is motivated by a rich literature on the theory of DA pioneered by Ben-David et al. (2007). We describe our use of adversarial training in Section 3.2.3. A special case of Ganin et al. (2016) is developed independently by Kim et al. (2016c) who motivate the method as a generalization of the feature augmentation method of Daum´e III (2009). Bousmalis et al. (2016) extend the framework of Ganin et al. (2016) by additionally encouraging the private and shared features to be mutually exclusive. This is achieved by minimizing the dot product between the two sets of parameters and simultaneously reconstructing the input (for all domains) from the features induced by these parameters. Both Ganin et al. (2016) and Bousmalis et al. (2016) discuss applications in computer vision. Zhang et al. (2017) apply the method of Bousmalis et al. (2016) to tackle transfer learning in NLP. They focus on transfer learning between classification tasks over the same domain (“aspect transfer”). They assume a set of keywords associated with each aspect and use these keywords to inform the learner of the relevance of each sentence for that aspect. 2.2 Spoken Language Understanding Recently, there has been much investment on the personal digital assistant (PDA) technology in industry (Sarikaya, 2015; Sarikaya et al., 2016). Apples Siri, Google Now, Microsofts Cortana, and Amazons Alexa are some examples of personal digital assistants. Spoken language understanding (SLU) is an important component of these examples that allows natural communication between the user and the agent (Tur, 2006; El-Kahky et al., 2014). PDAs support a number of scenarios including creating reminders, setting up alarms, note taking, scheduling meetings, finding and consuming entertainment (i.e. movie, music, games), finding places of interest and getting driving directions to them (Kim et al., 2016a). 
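The gradient reversal layer of Ganin et al. (2016) mentioned in Section 2.1 is commonly realized as an identity map whose gradient is negated on the backward pass. The following is a minimal PyTorch sketch of that mechanism; the models in this paper were implemented in Dynet, so this is illustrative rather than the authors' code.

```python
# Minimal sketch of a gradient reversal layer for adversarial domain training:
# identity on the forward pass, negated (and optionally scaled) gradient on the
# backward pass, so that minimizing the domain-classification loss pushes the
# shared encoder toward domain-invariant features.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing back into the shared encoder;
        # lam receives no gradient.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: domain_logits = domain_classifier(grad_reverse(shared_features))
```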
Naturally, there has been an extensive line of prior studies for domain scaling problems to easily scale to a larger number of domains: pretraining (Kim et al., 2015c), transfer learning (Kim et al., 2015d), constrained decoding with a single model (Kim et al., 2016a), multi-task learning (Jaech et al., 2016), neural domain adaptation (Kim et al., 2016c), domainless adaptation (Kim et al., 2016b), a sequence-to-sequence model (Hakkani-T¨ur et al., 2016), domain attention (Kim et al., 2017) and zero-shot learning(Chen et al., 2016; Ferreira et al., 2015). There are also a line of prior works on enhancing model capability and features: jointly modeling intent and slot predictions (Jeong and Lee, 2008; Xu and Sarikaya, 2013; Guo et al., 2014; Zhang and Wang, 2016; Liu and Lane, 2016a,b), modeling SLU models with web search click logs (Li et al., 2009; Kim et al., 2015a) and enhancing features, including representations (Anastasakos et al., 2014; Sarikaya et al., 2014; Celikyilmaz et al., 2016, 2010; Kim et al., 2016d) and lexicon (Liu and Sarikaya, 2014; Kim et al., 2015b). All the above works assume that there are no any data shift issues which our work try to solve. 3 Method 3.1 BiLSTM Encoder We use an LSTM simply as a mapping φ : Rd × Rd′ →Rd′ that takes an input vector x and a state vector h to output a new state vector h′ = φ(x, h). See Hochreiter and Schmidhuber (1997) for a detailed description. Let C denote the set of character types and W the set of word types. Let ⊕denote the vector concatenation operation. We encode an utterance using the wildly successful architecture given by bidirectional LSTMs (BiLSTMs) (Schuster and 1298 Paliwal, 1997; Graves, 2012). The model parameters Θ associated with this BiLSTM layer are • Character embedding ec ∈R25 for each c ∈ C • Character LSTMs φC f, φC b : R25×R25 →R25 • Word embedding ew ∈R100 for each w ∈W • Word LSTMs φW f , φW b : R150×R100 →R100 Let w1 . . . wn ∈W denote a word sequence where word wi has character wi(j) ∈C at position j. First, the model computes a character-sensitive word representation vi ∈R150 as fC j = φC f ewi(j), fC j−1  ∀j = 1 . . . |wi| bC j = φC b ewi(j), bC j+1  ∀j = |wi| . . . 1 vi = fC |wi| ⊕bC 1 ⊕ewi for each i = 1 . . . n.1 Next, the model computes fW i = φW f vi, fW i−1  ∀i = 1 . . . n bW i = φW b vi, bW i+1  ∀i = n . . . 1 and induces a character- and context-sensitive word representation hi ∈R200 as hi = fW i ⊕bW i (1) for each i = 1 . . . n. For convenience, we write the entire operation as a mapping BiLSTMΘ: (h1 . . . hn) ←BiLSTMΘ(w1 . . . wn) 3.2 Unsupervised DA In unsupervised domain adaptation, we assume labeled data for the source domain but not the target domain. Our approach closely follows the previous work on unsupervised neural domain adaptation by Ganin et al. (2016) and Bousmalis et al. (2016). We have three BiLSTM encoders described in Section 3.1: 1. Θsrc: induces source-specific features 2. Θtgt: induces target-specific features 3. Θshd: induces domain-invariant features We now define a series of loss functions defined by these encoders. 1For simplicity, we assume some random initial state vectors such as f C 0 and bC |wi|+1 when we describe LSTMs. 3.2.1 Source Side Tagging Loss The most obvious objective is to minimize the model’s error on labeled training data for the source domain. Let w1 . . . wn ∈W be an utterance in the source domain annotated with labels y1 . . . yn ∈L. We induce (hsrc 1 . . . hsrc n ) ←BiLSTMΘsrc(w1 . . . wn) (hshd 1 . . . hshd n ) ←BiLSTMΘshd(w1 . . . 
wn) Then we define the probability of tag y ∈L for the i-th word as zi = W 2 tag tanh W 1 tag¯hi + b1 tag  + b2 tag p(y|hi) ∝exp ([zi]y) where ¯hi = hsrc i ⊕hshd i and Θtag = {W 1 tag, W 2 tag, b1 tag, b2 tag} denotes additional feedfoward parameters. The tagging loss is given by the negative log likelihood Ltag (Θsrc, Θshd, Θtag) = − X i log p yi|¯hi  where we iterate over annotated words (wi, yi) on the source side. 3.2.2 Reconstruction Loss Following previous works, we ground feature learning by reconstructing encoded utterances. Both Bousmalis et al. (2016) and Zhang et al. (2017) use mean squared errors for reconstruction, the former of image pixels and the latter of words in a context window. In contrast, we use an attention-based LSTM that fully re-generates the input utterance and use its log loss. More specifically, let w1 . . . wn ∈W be an utterance in domain d ∈{src, tgt}. We first use the relevant encoders as before (hd 1 . . . hd n) ←BiLSTMΘd(w1 . . . wn) (hshd 1 . . . hshd n ) ←BiLSTMΘshd(w1 . . . wn) The concatenated vectors ¯hi = hd i ⊕hshd i are fed into the standard attention-based decoder (Bahdanau et al., 2014) to define the probability of word w at each position i with state vector µi−1 (where µ0 = ¯hn): αj ∝exp  µ⊤ i−1¯hj  ∀j ∈{1 . . . n} ˜hi = n X j=1 αj¯hj µi = φR(µi−1 ⊕˜hi, µi−1) p(w|µi) ∝exp [W 1 recµi + b1 rec]w  1299 where Θrec = {φR, W 1 rec, b1 rec} denotes additional parameters. The reconstruction loss is given by the negative log likelihood Lrec (Θsrc, Θtgt, Θshd, Θrec) = − X i log p (wi|µi) where we iterate over words wi in both the source and target utterances. 3.2.3 Adversarial Domain Classification Loss Ganin et al. (2016) propose introducing an adversarial loss to make shared features domaininvariant. This is motivated by a theoretical result of Ben-David et al. (2007) who show that the generalization error on the target domain depends on how “different” the source and the target domains are. This difference is approximately measured by 2  1 −2 inf Θ error(Θ)  (2) where error(Θ) is the domain classification error using model Θ. It is assumed that the source and target domains are balanced so that infΘ error(Θ) ≤1/2 and the difference lies in [0, 2]. In other words, we want to make error(Θ) as large as possible in order to generalize well to the target domain. The intuition is that the more domain-invariant our features are, the easier it is to benefit from the source side training when testing on the target side. It can also be motivated as a regularization term (Ganin et al., 2016). Let w1 . . . wn ∈W be an utterance in domain d ∈{src, tgt}. We first use the shared encoder (hshd 1 . . . hshd n ) ←BiLSTMΘshd(w1 . . . wn) It is important that we only use the shared encoder for this loss. Then we define the probability of domain d for the utterance as zi = W 2 adv tanh W 1 adv n X i=1 hshd i + b1 adv ! + b2 adv p(d|hi) ∝exp ([zi]d) where Θadv = {W 1 adv, W 2 adv, b1 adv, b2 adv} denotes additional feedfoward parameters. The adversarial domain classification loss is given by the positive log likelihood Ladv (Θshd, Θadv) = X i log p  d(i)|w(i) where we iterate over domain-annotated utterances (w(i), d(i)). Random prediction training While past work only consider using a negative gradient (Ganin et al., 2016; Bousmalis et al., 2016) or positive log likelihood (Zhang et al., 2017) to perform adversarial training, it is unclear whether these approaches are optimal for the purpose of “confusing” the domain predictor. 
For instance, minimizing log likelihood can lead to a model accurately predicting the opposite domain, compromising the goal of inducing domain-invariant representations. Thus we propose to instead optimize the shared parameters for random domain predictions. Specifically, the above loss is replaced with Ladv (Θshd, Θadv) = − X i log p  d(i)|w(i) where d(i) is set to be src with probability 0.5 and tgt with probability 0.5. By optimizing for random predictions, we achieve the desired effect: the shared parameters are trained to induce features that cannot discriminate between the source and the target domains. 3.2.4 Non-Adversarial Domain Classification Loss In addition to the adversarial loss for domaininvariant parameters, we also introduce a nonadversarial loss for domain-specific parameters. Given w1 . . . wn ∈W in domain d ∈{src, tgt}, we use the private encoder (hd 1 . . . hd n) ←BiLSTMΘd(w1 . . . wn) It is important that we only use the private encoder for this loss. Then we define the probability of domain d for the utterance as zi = W 2 nadv tanh W 1 nadv n X i=1 hd i + b1 nadv ! + b2 nadv p(d|hi) ∝exp ([zi]d) where Θnadv = {W 1 nadv, W 2 nadv, b1 nadv, b2 nadv} denotes additional feedfoward parameters. The nonadversarial domain classification loss is given by the negative log likelihood Lnadv (Θd, Θnadv) = X i log p  d(i)|w(i) where we iterate over domain-annotated utterances (w(i), d(i)). 1300 3.2.5 Orthogonality Loss Finally, following Bousmalis et al. (2016), we further encourage the domain-specific features to be mutually exclusive with the shared features by imposing soft orthogonality constraints. This is achieved as follows. Given an utterance w1 . . . wn ∈W in domain d ∈{src, tgt}. We compute (hd 1 . . . hd n) ←BiLSTMΘd(w1 . . . wn) (hshd 1 . . . hshd n ) ←BiLSTMΘshd(w1 . . . wn) The orthogonality loss for this utterance is given by Lorth (Θsrc, Θtgt, Θshd) = X i (hd i )⊤hshd i where we iterate over words i in both the source and target utterances. 3.2.6 Joint Objective For unsupervised DA, we optimize Lunsup (Θsrc, Θtgt, Θshd, Θtag, Θrec, Θadv) = Ltag (Θsrc, Θshd, Θtag) + Lrec (Θsrc, Θtgt, Θshd, Θrec) + Ladv (Θshd, Θadv) + Lnadv (Θsrc, Θnadv) + Lnadv (Θtgt, Θnadv) + Lorth (Θsrc, Θtgt, Θshd) with respect to all model parameters. In an online setting, given an utterance we compute its reconstruction, adversarial, orthogonality, and tagging loss if in the source domain, and take a gradient step on the sum of these losses. 3.3 Supervised DA In supervised domain adaptation, we assume labeled data for both the source domain and the target domain. We can easily incorporate supervision in the target domain by adding Ltag (Θtgt, Θshd, Θtag) to the unsupervised DA objective: Lsup (Θsrc, Θtgt, Θshd, Θtag, Θrec, Θadv) = Lunsup (Θsrc, Θtgt, Θshd, Θtag, Θrec, Θadv) + Ltag (Θtgt, Θshd, Θtag) (3) We mention that the approach by Kim et al. (2016c) is a special case of this objective; they optimize Lsup2 (Θsrc, Θtgt, Θshd, Θtag) =Ltag (Θsrc, Θshd, Θtag) + Ltag (Θtgt, Θshd, Θtag) (4) which is motivated as a neural extension of the feature augmentation method of Daum´e III (2009). 4 Experiments In this section, we conducted a series of experiments to evaluate the proposed techniques on datasets obtained from real usage. 4.1 Test Domains and Tasks We test our approach on a suite of 5 Microsoft Cortana domains with 2 separate tasks in spoken language understanding: (1) intent classification and (2) slot (label) tagging. 
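To make the adversarial scheme of Section 3.2.3 and its role in the joint objective of Section 3.2.6 concrete, the following schematic PyTorch sketch implements the randomized-prediction loss and the orthogonality penalty. All names (domain_logits, h_private, h_shared, and the loss terms in the final comment) are illustrative placeholders, not the authors' Dynet implementation.

```python
# Schematic sketch of the randomized-prediction adversarial loss (Section 3.2.3)
# and the orthogonality loss (Section 3.2.5).
import torch
import torch.nn.functional as F

def random_prediction_adv_loss(domain_logits):
    # domain_logits: (batch, 2) scores over {src, tgt}, computed from the summed
    # shared-encoder states. Rather than reversing gradients or maximizing the
    # likelihood of the wrong domain, fit a uniformly random domain label so the
    # shared features cannot discriminate source from target.
    random_domains = torch.randint(0, 2, (domain_logits.size(0),))
    return F.cross_entropy(domain_logits, random_domains)

def orthogonality_loss(h_private, h_shared):
    # h_private, h_shared: (n_words, d) private and shared encoder states for one
    # utterance. Sum of per-word dot products, as written in Section 3.2.5;
    # implementations often penalize the squared value instead.
    return (h_private * h_shared).sum()

# Unsupervised joint objective (Section 3.2.6), given per-batch component losses:
# loss = tag_loss + rec_loss + adv_loss + nadv_src_loss + nadv_tgt_loss + orth_loss
```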
The intent classification task is a multi-class classification problem with the goal of determining to which one of the n intents a user utterance belongs conditioning on the given domain. The slot tagging task is a sequence labeling problem with the goal of identifying entities and chunking of useful information snippets in a user utterance. For example, a user could say reserve a table at joeys grill for thursday at seven pm for five people. Then the goal of the first task would be to classify this utterance as MAKE RESERVATION intent given the domain PLACES, and the goal of the second task would be to tag joeys grill as RESTAURANT, thursday as DATE, seven pm as TIEM, and five as NUMBER PEOPLE. Table 1 gives a summary of the 5 test domains. We note that the domains have various levels of label granularity. Domain Intent Slot Description calendar 23 43 Set appointments in calendar comm. 38 45 Make calls & send messages places 35 64 Find locations & directions reminder 14 35 Remind tasks in a to-do list weather 13 19 Get weather information Table 1: The number of intents, the number of slots and a short description of the test domains. 4.2 Experimental Setup We consider 2 possible domain adaptation (DA) scenarios: (1) adaptation of an engineered dataset to a live user dataset and (2) adaptation of an old 1301 dataset to a new dataset. For the first DA scenario, we test whether our approach can effectively make a system adapt from experimental, engineered data to real-world, live data. We use synthetic data which domain experts manually create based on a given domain schema2 before the system goes live as the engineered data. We use transcribed dataset from users’ speech input as the live user data. For the second scenario, we test whether our approach can effectively make a system adapt over time. A large number of users will quickly generate a large amount of data, and the usage pattern could also change. We use annotation data over 1 month in 2013 (more precisely August of 2013) as our old dataset, and use the whole data between 2014 and 2016 as our new dataset regardless of whether the data type is engineered or live user. As we describe in the earlier sections, we consider both supervised and unsupervised DA. We apply our DA approach with labeled data in the target domain for the supervised setting and with unlabeled data for the unsupervised one. We give details of the baselines and variants of our approach below. Unsupervised DA baselines and variants: • SRC: a single LSTM model trained on a source domain without DA techniques • DAW : an unsupervised DA model with a word-level decoder (i.e., re-generate each word independently) • DAS: an unsupervised DA model with a sentence-level decoder described in Section 3.2 Supervised DA baselines and variants: • SRC: a single LSTM model trained only on a source domain • TGT: a single LSTM model trained only on a target domain • Union: a single LSTM model trained on the union of source and target domains. • DA: a supervised DA model described in Section 3.3 • DAA: DA with adversary domain training 2This is a semantic template that defines a set of intents and slots for each domain according to the intended functionality of the system. • DAU: DA with reasonably sufficient unlabeled data In our experiments, all the models were implemented using Dynet (Neubig et al., 2017) and were trained using Stochastic Gradient Descent (SGD) with Adam (Kingma and Ba, 2015)—an adaptive learning rate algorithm. 
We used the initial learning rate of 4 × 10−4 and left all the other hyper parameters as suggested in Kingma and Ba (2015). Each SGD update was computed without a minibatch with Intel MKL (Math Kernel Library)3. We used the dropout regularization (Srivastava et al., 2014) with the keep probability of 0.4. We encode user utterances with BiLSTMs as described in Section 3.1. We initialize word embeddings with pre-trained embeddings used by Lample et al. (2016). In the following sections, we report intent classification results in accuracy percentage and slot results in F1-score. To compute slot F1-score, we used the standard CoNLL evaluation script4 4.3 Results: Unsupervised DA We first show our results in the unsupervised DA setting where we have a labeled dataset in the source domain, but only unlabeled data in the target domain. We assume that the amount of data in both datasets is sufficient. Dataset statistics are shown in Table 2. The performance of the baselines and our model variants are shown in Table 3. The left side of the table shows the results of the DA scenario of adapting from engineered data to live user data, and the baseline which trained only on the source domain (SRC) show a poor performance, yielding on average 48.5% on the intent classification and 42.7% F1-score on the slot tagging. Using our DA approach with a word-level decoder (DAW ) shows a significant increase in performance in all 5 test domains, yielding on average 82.2% intent accuracy and 80.5% slot F1-score. The performance increases further using the DA approach with a sentence-level decoder DAS, yielding on average 85.6% intent accuracy and 83.0% slot F1-score. The right side of the table shows the results of the DA scenario of adapting from old to new data, and the baseline trained only on SRC also show 3https://software.intel.com/en-us/articles/intelr-mkl-andc-template-libraries 4http://www.cnts.ua.ac.be/conll2000/chunking/output.html 1302 Engineered → Live User Old → New Domain Train Train* Dev Test Train Train* Dev Test calendar 16904 50000 1878 10k 13165 13165 1463 10k communication 32072 50000 3564 10k 12631 12631 1403 10k places 23410 50000 2341 10k 21901 21901 2433 10k reminder 19328 50000 1933 10k 16245 16245 1805 10k weather 20794 50000 2079 10k 15575 15575 1731 10k AVG 23590 50000 2359 10k 15903 15903 1767 10k Table 2: Data statistics for unsupervised domain adaptation; In the first row, the columns are adaptation of engineered dataset to live user dataset, and and adaptation of old dataset to new dataset. In the second row, columns are domain, size of labeled training, unlabeled training, development and test sets. * denotes unlabeled data Engineered →User Live Old →New Task Domain SRC DAW DAS SRC DAW DAS Intent calendar 47.5 82.0 84.6 50.7 85.7 88.8 communication 45.8 75.3 81.2 49.4 83.2 86.2 places 48.5 83.7 86.3 51.7 88.1 91.1 reminder 50.7 83.9 88.7 53.3 88.8 92.8 weather 50.3 86.3 87.1 53.4 89.1 92.2 AVG 48.5 82.2 85.6 51.7 86.9 90.2 Slot calendar 42.4 79.4 81.7 42.2 84.7 87.9 communication 41.1 75.3 79.1 41.5 85.3 89.1 places 40.2 81.6 83.8 44.1 85.4 88.7 reminder 42.6 83.5 85.7 47.4 87.6 91.2 weather 47.2 82.8 84.7 43.2 85.6 89.5 AVG 42.7 80.5 83.0 43.7 85.7 89.3 Table 3: Intent classification accuracy (%) and slot tagging F1-score (%) for the unsupervised domain adaptation. The results that perform in each domain are in bold font. 
Engineered → Live User Old → New Domain Train Train* Dev Test Train Train* Dev Test calendar 16904 1000 100 10k 13165 1000 100 10k communication 32072 1000 100 10k 12631 1000 100 10k places 23410 1000 100 10k 21901 1000 100 10k reminder 19328 1000 100 10k 16245 1000 100 10k weather 20794 1000 100 10k 15575 1000 100 10k AVG 23590 1000 100 10k 15903 1000 100 10k Table 4: Data statistics for supervised domain adaptation Engineered →User Live Old →New Domain SRC TGT Union DA DAA DAU SRC TGT Union DA DAA DAU I calendar 47.5 69.2 48.3 80.7 80.5 82.4 50.7 69.2 49.9 74.4 75.4 75.8 comm. 45.8 67.4 47.0 77.5 78.0 79.7 49.4 65.8 50.0 70.2 70.7 71.9 places 48.5 71.2 48.5 82.0 82.4 83.2 51.7 69.6 52.2 75.8 76.4 77.3 reminder 50.7 75.0 49.9 83.9 84.1 87.3 53.3 72.3 53.9 77.2 78.0 78.5 weather 50.3 73.8 49.6 84.3 84.7 85.6 53.4 71.4 52.7 76.9 78.1 79.2 AVG 48.5 71.3 48.7 81.7 81.9 83.6 51.7 69.7 51.7 74.9 75.7 76.5 S calendar 42.4 64.9 43.0 76.1 76.7 77.1 42.2 61.8 41.6 68.0 66.9 69.3 comm. 41.1 62.0 40.4 73.3 72.1 73.8 41.5 61.1 44.9 67.2 66.3 68.4 places 40.2 61.8 39.0 72.1 72.0 72.9 44.1 64.6 47.7 70.1 68.7 72.5 reminder 42.6 65.1 42.6 76.8 75.7 80.0 47.4 70.9 44.2 78.4 76.2 78.9 weather 47.2 71.2 46.4 82.6 83.0 84.4 43.2 64.1 44.7 71.0 69.0 70.2 AVG 42.7 65.0 42.3 76.2 75.9 77.6 43.7 64.5 44.6 71.0 69.4 71.9 Table 5: Intent classification accuracy (%) and slot tagging F1-score (%) for the supervised domain adaptation. a similar poor performance, yielding on average 51.7% accuracy and 43.7% F1-score. DAW approach shows a significant performance increase in all 5 test domains, yielding on average 86.9% intent accuracy and 85.7% slot F1-score. Similarly, the performance increases further with the DAS with 90.2% intent accuracy and 89.3% F1score. 1303 Our DA approach variants yield average error reductions of 72.04% and 79.71% for intent classification and 70.33% and 80.99% for slot tagging. The results suggest that our DA approach can quickly make a model adapt from synthetic data to real-world data and from old data to new data with the additional use of only 2 to 2.5 more data from the target domain. Aside from the performance boost itself, the approach shows even more power since the new data from the target down do not need to be labeled and it only requires collecting a little more data from the target domain. We note that the model development sets were created only from the source domain for a fully unsupervised setting. But having the development set from the target domain shows even more boost in performance although not shown in the results, and labeling only the development set from the target domain is relatively less expensive than labeling the whole dataset. 4.4 Results: Supervised DA Second, we show our results in the supervised DA setting where we have a sufficient amount of labeled data in the source domain but relatively insufficient amount of labeled data in the target domain. Having more labeled data in the target domain would most likely help with the performance, but we intentionally made the setting more disadvantageous for our DA approach to better simulate real-world scenarios where there is usually lack of resources and time to label a large amount of new data. For each personal assistant test domain, we only used 1000 training utterances to simulate scarcity of newly labeled data, and dataset statistics are shown in Table 2. Unlike the unsupervised DA scenario, here we used the development sets created from the target domain shown in Table 4. 
The left side of Table 5 shows the results of the supervised DA approach of adapting from engineered data to live user data. The baseline trained only on the source (SRC) shows on average 48.5% intent accuracy and 42.7% slot F1-score. Training only on the target domain (TGT) increases the performance to 71.3% and 65.0%, but training on the union of the source and target domains (Union) again brings the performance down to 48.7% and 42.3%. As shown in the unsupervised setting, using our DA approach (DA) shows significant performance increase in all 5 test domains, yielding on average 81.7% intent accuracy and 76.2% slot tagging. The DA approach with adversary domain training (DAA) shows similar performance compared to that of DA, and performance shows more increase when using our DA approach with sufficient unlabeled data5 (DAU), yielding on average 83.6% and 77.6%. For the second scenario of adapting from old to new dataset, the results show a very similar trend in performance. The results show that our supervised DA (DA) approach also achieves a significant performance gain in all 5 test domains, yielding average error reductions of 68.18% and 51.35% for intent classification and 60.90% and 50.09% for slot tagging. The results suggest that an effective domain adaptation can be done using the supervised DA by having only a handful more data of 1k newly labeled data points. In addition, having both a small amount of newly labeled data combined with sufficient unlabeled data can help the models perform even better. The poor performance of using the union of both source and target domain data might be due to the relatively very small size of the target domain data, overwhelmed by the data in the source domain. 4.5 Results: Adversarial Domain Classification Loss Eng. →User Task Domain RAND ADV Intent calendar 84.6 81.1 communication 81.2 77.9 places 86.3 83.5 reminder 88.7 85.8 weather 87.1 84.2 AVG 85.6 82.5 Slot calendar 81.7 78.7 communication 79.1 75.7 places 83.8 80.6 reminder 85.7 82.4 weather 84.7 81.7 AVG 83.0 79.8 Table 6: Intent classification accuracy (%) and slot tagging F1-score (%) for the unsupervised domain adaptation with two different adversarial classification losses – our claimed random domain predictions (RAND) and adversarial loss (ADVR) of Ganin et al. (2016) as explained in 3.2.3. 5This data is used for unsupervised DA experiments (Table 2). 1304 The impact on the performance of two different adversarial classification losses are shown in Table 6. RAND represents the unsupervised DA model with sentence-level decoder (DAS) using random prediction loss. The ADV shows the performance of same model using the adversarial loss of Ganin et al. (2016) as described in 3.2.3. Unfortunately, in the deployment shift scenario, using the adversarial loss fails to provide any improvement on intent classification accuracy and slot tagging F1 score, achieving 82.5% intent accuracy and 79.8% slot F1 score. These results align with our hypothesis that the adversarial loss using does not confuse the classifier sufficiently. 4.6 Proxy A-distance Eng. →User Old →New Domain dA dA calendar 0.58 0.43 comm. 0.54 0.44 places 0.68 0.62 reminder 0.54 0.57 weather 0.57 0.54 AVG 0.58 0.52 Table 7: Proxy A-distance of resulting models: (1) engineered and live user dataset and (2) old and new dataset. The results in shown in Table 7 show Proxy Adistance(Ganin et al., 2016) to check if our adversary domain training generalize well to the target domain. 
The distance between two datasets is computed by ˆdA = 2(1 −2 min {ε, 1 −ε}) (5) where ε is a generalization error in discriminating between the source and target datasets. The range of ˆdA distance is between 0 and 2.0. 0 is the best case where adversary training successfully fake shared encoder to predict domains. In other words, thanks to adversary training our model make the domain-invariant features in shared encoder in order to generalize well to the target domain. 4.7 Vocabulary distance between engineered data and live user data The results in shown in Table 8 show the discrepancy between two datasets. We measure the degree of overlap between vocabulary V employed Eng. →User Old →New Domain dV dV calendar 0.80 0.72 comm. 0.80 0.93 places 0.82 0.72 reminder 0.89 0.71 weather 0.72 0.73 AVG 0.80 0.76 Table 8: Distance between different datasets: (1) engineered and live user dataset and (2) old and new dataset. by the two datasets. We simply take the Jaccard coefficient between the two sets of such vocabulary: dV (s, t) = 1 −JC(Vs, Vt), where Vs is the set of vocabulary in source s domain, and Vt is the corresponding set for target t domain and JC(A, B) = |A∩B| |A∪B| is the Jaccard coefficient, measuring the similarity of two sets. The distance dV is the high it means that they are not shared with many words. Overall, the distance between old and new dataset are still far and the number of overlapped are small, but better than live user case. 5 Conclusion In this paper, we have addressed two types of data shift common in SLU applications: 1. transferring from synthetic data to live user data (a deployment shift), and 2. transferring from stale data to current data (a temporal shift). Our method is based on domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain. We use and build on several recent advances in neural domain adaptation such as adversarial training and domain separation network, proposing a new effective adversarial training scheme based on randomized predictions. In both supervised and unsupervised adaptation scenarios, our approach yields clear improvement over strong baselines. 1305 References Tasos Anastasakos, Young-Bum Kim, and Anoop Deoras. 2014. Task specific continuous word representations for mono and multi-lingual spoken language understanding. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. 2007. Analysis of representations for domain adaptation. Advances in neural information processing systems 19:137. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems. pages 343– 351. Asli Celikyilmaz, Ruhi Sarikaya, Minwoo Jeong, and Anoop Deoras. 2016. An empirical investigation of word class-based features for natural language understanding. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP) 24(6). Asli Celikyilmaz, Silicon Valley, and Dilek HakkaniTur. 2010. Convolutional neural network based semantic tagging with entity embeddings. genre . Yun-Nung Chen, Dilek Hakkani-T¨ur, and Xiaodong He. 2016. 
Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE. Hal Daum´e III. 2009. Frustratingly easy domain adaptation. arXiv preprint arXiv:0907.1815 . Ali El-Kahky, Derek Liu, Ruhi Sarikaya, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2014. Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs. IEEE, Proceedings of the ICASSP. Emmanuel Ferreira, Bassam Jabaian, and Fabrice Lef`evre. 2015. Zero-shot semantic parser for spoken language understanding. In Sixteenth Annual Conference of the International Speech Communication Association. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research 17(59):1–35. Alex Graves. 2012. Neural networks. In Supervised Sequence Labelling with Recurrent Neural Networks, Springer. Daniel Guo, Gokhan Tur, Wen-tau Yih, and Geoffrey Zweig. 2014. Joint semantic utterance classification and slot filling with recursive neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE. IEEE. Dilek Hakkani-T¨ur, Gokhan Tur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and YeYi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Proceedings of The 17th Annual Meeting of the International Speech Communication Association. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8). Aaron Jaech, Larry Heck, and Mari Ostendorf. 2016. Domain adaptation of recurrent neural networks for natural language understanding. arXiv preprint arXiv:1604.00117 . Minwoo Jeong and Gary Geunbae Lee. 2008. Triangular-chain conditional random fields. IEEE Transactions on Audio, Speech, and Language Processing 16(7). Young-Bum Kim, Minwoo Jeong, Karl Stratos, and Ruhi Sarikaya. 2015a. Weakly supervised slot tagging with partially labeled sequences from web search click logs. In Proceedings of the NAACL. Association for Computational Linguistics. Young-Bum Kim, Alexandre Rochette, and Ruhi Sarikaya. 2016a. Natural language model reusability for scaling to different domains. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics . Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017. Domain attention with an ensemble of experts. In Annual Meeting of the Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, Xiaohu Liu, and Ruhi Sarikaya. 2015b. Compact lexicon selection with spectral methods. In Proc. of Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2015c. Pre-training of hidden-unit crfs. In Proc. of Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016b. Domainless adaptation by constrained decoding on a schema lattice. Proceedings of the 26th International Conference on Computational Linguistics (COLING) . 1306 Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016c. Frustratingly easy neural domain adaptation. Proceedings of the 26th International Conference on Computational Linguistics (COLING) . 
Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016d. Scalable semi-supervised query classification using matrix sketching. In The 54th Annual Meeting of the Association for Computational Linguistics. page 8. Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015d. New transfer learning techniques for disparate label sets. ACL. Association for Computational Linguistics . Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR). . Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360 . Xiao Li, Ye-Yi Wang, and Alex Acero. 2009. Extracting structured information from user queries with semi-supervised conditional random fields. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval. Bing Liu and Ian Lane. 2016a. Attention-based recurrent neural network models for joint intent detection and slot filling. In Interspeech 2016. Bing Liu and Ian Lane. 2016b. Joint online spoken language understanding and language modeling with recurrent neural networks. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, Los Angeles. Xiaohu Liu and Ruhi Sarikaya. 2014. A discriminative model based entity dictionary weighting approach for spoken language understanding. In Spoken Language Technology Workshop (SLT). IEEE, pages 195–199. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 . Ruhi Sarikaya. 2015. The technology powering personal digital assistants. Keynote at Interspeech, Dresden, Germany. Ruhi Sarikaya, Asli Celikyilmaz, Anoop Deoras, and Minwoo Jeong. 2014. Shrinkage based features for slot tagging with conditional random fields. In INTERSPEECH. Ruhi Sarikaya, Paul Crook, Alex Marin, Minwoo Jeong, Jean-Philippe Robichaud, Asli Celikyilmaz, Young-Bum Kim, Alexandre Rochette, Omar Zia Khan, Xiuahu Liu, et al. 2016. An overview of endto-end language understanding and dialog management for personal digital assistants. In IEEE Workshop on Spoken Language Technology. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11). Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1). Gokhan Tur. 2006. Multitask learning for spoken language understanding. In In Proceedings of the ICASSP. Toulouse, France. Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint intent detection and slot filling. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE. Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. IJCAI. Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2017. Aspect-augmented adversarial networks for domain adaptation. arXiv preprint arXiv:1701.00188 . 1307
2017
119
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 123–135 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1012 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 123–135 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1012 A Convolutional Encoder Model for Neural Machine Translation Jonas Gehring, Michael Auli, David Grangier, Yann N. Dauphin Facebook AI Research Abstract The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. We present a faster and simpler architecture based on a succession of convolutional layers. This allows to encode the source sentence simultaneously compared to recurrent networks for which computation is constrained by temporal dependencies. On WMT’16 EnglishRomanian translation we achieve competitive accuracy to the state-of-the-art and on WMT’15 English-German we outperform several recently published results. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT’14 English-French translation. We speed up CPU decoding by more than two times at the same or higher accuracy as a strong bidirectional LSTM.1 1 Introduction Neural machine translation (NMT) is an end-to-end approach to machine translation (Sutskever et al., 2014). The most successful approach to date encodes the source sentence with a bi-directional recurrent neural network (RNN) into a variable length representation and then generates the translation left-to-right with another RNN where both components interface via a soft-attention mechanism (Bahdanau et al., 2015; Luong et al., 2015a; Bradbury and Socher, 2016; Sennrich et al., 2016a). Recurrent networks are typically parameterized as long short term memory networks (LSTM; Hochreiter et al. 1997) or gated recurrent units (GRU; Cho et al. 2014), often with residual or skip connections (Wu et al., 2016; Zhou et al., 2016) to enable stacking of several layers (§2). There have been several attempts to use convolutional encoder models for neural machine trans1The source code will be availabe at https://github. com/facebookresearch/fairseq lation in the past but they were either only applied to rescoring n-best lists of classical systems (Kalchbrenner and Blunsom, 2013) or were not competitive to recurrent alternatives (Cho et al., 2014a). This is despite several attractive properties of convolutional networks. For example, convolutional networks operate over a fixed-size window of the input sequence which enables the simultaneous computation of all features for a source sentence. This contrasts to RNNs which maintain a hidden state of the entire past that prevents parallel computation within a sequence. A succession of convolutional layers provides a shorter path to capture relationships between elements of a sequence compared to RNNs.2 This also eases learning because the resulting tree-structure applies a fixed number of non-linearities compared to a recurrent neural network for which the number of non-linearities vary depending on the time-step. Because processing is bottom-up, all words undergo the same number of transformations, whereas for RNNs the first word is over-processed and the last word is transformed only once. 
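The count in the footnote, max(1, ⌈(n − 1)/(k − 1)⌉) stacked convolutions versus n sequential RNN steps, can be made concrete with a small illustrative calculation (assuming kernel width k = 3, as used later in the paper):

```python
# Number of stacked convolutional forwards needed for interactions to span a
# length-n source with kernel width k, versus n sequential steps for an RNN.
import math

def conv_forwards(n, k):
    return max(1, math.ceil((n - 1) / (k - 1)))

for n in (5, 20, 50):
    print(n, "words:", conv_forwards(n, k=3), "stacked conv layers vs", n, "RNN steps")
```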
In this paper we show that an architecture based on convolutional layers is very competitive to recurrent encoders. We investigate simple average pooling as well as parameterized convolutions as an alternative to recurrent encoders and enable very deep convolutional encoders by using residual connections (He et al., 2015; §3). We experiment on several standard datasets and compare our approach to variants of recurrent encoders such as uni-directional and bi-directional LSTMs. On WMT’16 English-Romanian translation we achieve accuracy that is very competitive to the current state-of-the-art result. We perform competitively on WMT’15 English-German, and nearly match the performance of the best WMT’14 English-French system based on a deep LSTM setup when comparing on a commonly used subset 2For kernel width k and sequence length n we require max  1, l n−1 k−1 m forwards on a succession of stacked convolutional layers compared to n forwards with an RNN. 123 of the training data (Zhou et al. 2016; §4, §5). 2 Recurrent Neural Machine Translation The general architecture of the models in this work follows the encoder-decoder approach with soft attention first introduced in (Bahdanau et al., 2015). A source sentence x = (x1, . . . , xm) of m words is processed by an encoder which outputs a sequence of states z = (z1. . . . , zm). The decoder is an RNN network that computes a new hidden state si+1 based on the previous state si, an embedding gi of the previous target language word yi, as well as a conditional input ci derived from the encoder output z. We use LSTMs (Hochreiter and Schmidhuber, 1997) for all decoder networks whose state si comprises of a cell vector and a hidden vector hi which is output by the LSTM at each time step. We input ci into the LSTM by concatenating it to gi. The translation model computes a distribution over the V possible target words yi+1 by transforming the LSTM output hi via a linear layer with weights Wo and bias bo: p(yi+1|y1, . . . , yi, x) = softmax(Wohi+1 + bo) The conditional input ci at time i is computed via a simple dot-product style attention mechanism (Luong et al., 2015a). Specifically, we transform the decoder hidden state hi by a linear layer with weights Wd and bd to match the size of the embedding of the previous target word gi and then sum the two representations to yield di. Conditional input ci is a weighted sum of attention scores ai ∈Rm and encoder outputs z. The attention scores ai are determined by a dot product between hi with each zj, followed by a softmax over the source sequence: di = Wdhi + bd + gi, aij = exp dT i zj  Pm t=1 exp dT i zt , ci = m X j=1 aijzj In preliminary experiments, we did not find the MLP attention of (Bahdanau et al., 2015) to perform significantly better in terms of BLEU nor perplexity. However, we found the dot-product attention to be more favorable in terms of training and evaluation speed. We use bi-directional LSTMs to implement recurrent encoders similar to (Zhou et al., 2016) which achieved some of the best WMT14 EnglishFrench results reported to date. First, each word of the input sequence x is embedded in distributional space resulting in e = (e1, . . . , em). The embeddings are input to two stacks of uni-directional RNNs where the output of each layer is reversed before being fed into the next layer. The first stack takes the original sequence while the second takes the reversed input sequence; the output of the second stack is reversed so that the final outputs of the stacks align. 
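As a reference point for the dot-product attention defined above, here is a minimal NumPy sketch; the weight Wd, bias bd and the random inputs are stand-ins for illustration, not values from the paper's Torch implementation.

```python
# Dot-product attention: project the decoder state, add the previous target
# embedding, score each encoder output by a dot product, softmax over source
# positions, and return the weighted sum as the conditional input c_i.
import numpy as np

def dot_product_attention(h_i, g_i, z, W_d, b_d):
    # h_i: decoder output (d_dec,), g_i: previous target embedding (d_emb,),
    # z: encoder outputs (m, d_emb), W_d: (d_emb, d_dec), b_d: (d_emb,)
    d_i = W_d @ h_i + b_d + g_i          # d_i = W_d h_i + b_d + g_i
    scores = z @ d_i                     # dot products d_i^T z_j
    a_i = np.exp(scores - scores.max())
    a_i /= a_i.sum()                     # softmax over the source sequence
    c_i = a_i @ z                        # conditional input
    return c_i, a_i

# Example with the paper's sizes: 512-dim decoder state, 256-dim embeddings, m = 6.
rng = np.random.default_rng(0)
c, a = dot_product_attention(rng.normal(size=512), rng.normal(size=256),
                             rng.normal(size=(6, 256)),
                             rng.normal(size=(256, 512)), np.zeros(256))
```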
Finally, the top-level hidden states of the two stacks are concatenated and fed into a linear layer to yield z. We denote this encoder architecture as BiLSTM. 3 Non-recurrent Encoders 3.1 Pooling Encoder A simple baseline for non-recurrent encoders is the pooling model described in (Ranzato et al., 2015) which simply averages the embeddings of k consecutive words. Averaging word embeddings does not convey positional information besides that the words in the input are somewhat close to each other. As a remedy, we add position embeddings to encode the absolute position of each source word within a sentence. Each source embedding ej therefore contains a position embedding lj as well as the word embedding wj. Position embeddings have also been found helpful in memory networks for question-answering and language modeling (Sukhbaatar et al., 2015). Similar to the recurrent encoder (§2), the attention scores aij are computed from the pooled representations zj, however, the conditional input ci is a weighted sum of the embeddings ej, not zj, i.e., ej = wj + lj, zj = 1 k ⌊k/2⌋ X t=−⌊k/2⌋ ej+t, ci = m X j=1 aijej The input sequence is padded prior to pooling such that the encoder output matches the input length |z| = |x|. We set k to 5 in all experiments as (Ranzato et al., 2015). 3.2 Convolutional Encoder A straightforward extension of pooling is to learn the kernel in a convolutional neural network (CNN). The encoder output zj contains information about a fixed-sized context depending on the kernel width k but the desired context width may vary. This can 124 be addressed by stacking several layers of convolutions followed by non-linearities: additional layers increase the total context size while non-linearities can modulate the effective size of the context as needed. For instance, stacking 5 convolutions with kernel width k = 3 results in an input field of 11 words, i.e., each output depends on 11 input words, and the non-linearities allow the encoder to exploit the full input field, or to concentrate on fewer words as needed. To ease learning for deep encoders, we add residual connections from the input of each convolution to the output and then apply the non-linear activation function to the output (tanh; He et al., 2015); the non-linearities are therefore not ’bypassed’. Multi-layer CNNs are constructed by stacking several blocks on top of each other. The CNNs do not contain pooling layers which are commonly used for down-sampling, i.e., the full source sequence length will be retained after the network has been applied. Similar to the pooling model, the convolutional encoder uses position embeddings. The final encoder consists of two stacked convolutional networks (Figure 1): CNN-a produces the encoder output zj to compute the attention scores ai, while the conditional input ci to the decoder is computed by summing the outputs of CNN-c, zj = CNN-a(e)j, ci = m X j=1 aij CNN-c(e)j. In practice, we found that two different CNNs resulted in better perplexity as well as BLEU compared to using a single one (§5.3). We also found this to perform better than directly summing the ei without transformation as for the pooling model. 3.3 Related Work There are several past attempts to use convolutional encoders for neural machine translation, however, to our knowledge none of them were able to match the performance of recurrent encoders. 
(Kalchbrenner and Blunsom, 2013) introduce a convolutional sentence encoder in which a multi-layer CNN generates a fixed sized embedding for a source sentence, or an n-gram representation followed by transposed convolutions for directly generating a per-token decoder input. The latter requires the length of the translation prior to generation and both models were evaluated by rescoring the output of an existing translation system. (Cho et al., 2014a) propose a gated recursive CNN which is repeatedly applied until a fixed-size representation is obtained but the recurrent encoder achieves higher accuracy. In follow-up work, the authors improved the model via a soft-attention mechanism but did not reconsider convolutional encoder models (Bahdanau et al., 2015). Concurrently to our work, (Kalchbrenner et al., 2016) have introduced convolutional translation models without an explicit attention mechanism but their approach does not yet result in state-ofthe-art accuracy. (Lamb and Xie, 2016) also proposed a multi-layer CNN to generate a fixed-size encoder representation but their work lacks quantitative evaluation in terms of BLEU. Meng et al. (2015) and (Tu et al., 2015) applied convolutional models to score phrase-pairs of traditional phrasebased and dependency-based translation models. Convolutional architectures have also been successful in language modeling but so far failed to outperform LSTMs (Pham et al., 2016). 4 Experimental Setup 4.1 Datasets We evaluate different encoders and ablate architectural choices on a small dataset from the GermanEnglish machine translation track of IWSLT 2014 (Cettolo et al., 2014) with a similar setting to (Ranzato et al., 2015). Unless otherwise stated, we restrict training sentences to have no more than 175 words; test sentences are not filtered. This is a higher threshold compared to other publications but ensures proper training of the position embeddings for non-recurrent encoders; the length threshold did not significantly effect recurrent encoders. Length filtering results in 167K sentence pairs and we test on the concatenation of tst2010, tst2011, tst2012, tst2013 and dev2010 comprising 6948 sentence pairs.3 Our final results are on three major WMT tasks: WMT’16 English-Romanian. We use the same data and pre-processing as (Sennrich et al., 2016a) and train on 2.8M sentence pairs.4 Our model is word-based instead of relying on byte-pair encoding (Sennrich et al., 2016b). We evaluate on newstest2016. WMT’15 English-German. We use all available parallel training data, namely Europarl v7, Com3Different to the other datasets, we lowercase the training data and evaluate with case-insensitive BLEU. 4We followed the pre-processing of https: //github.com/rsennrich/wmt16-scripts/ blob/master/sample/preprocess.sh and added the back-translated data from http://data.statmt.org/ rsennrich/wmt16_backtranslations/en-ro. 125 h h LSTM Die Katze schlief ein <p> <p> Die Katze schlief ein <p> <p> the cat fell c c Convolutional Encoder Networks Attention Weights Conditional Input Computation LSTM Decoder Figure 1: Neural machine translation model with single-layer convolutional encoder networks. CNN-a is on the left and CNN-c is at the right. Embedding layers are not shown. mon Crawl and News Commentary v10 and apply the standard Moses tokenization to obtain 3.9M sentence pairs (Koehn et al., 2007). We report results on newstest2015. WMT’14 English-French. We use a commonly used subset of 12M sentence pairs (Schwenk, 2014), and remove sentences longer than 150 words. 
This results in 10.7M sentence-pairs for training. Results are reported on ntst14. A small subset of the training data serves as validation set (5% for IWSLT’14 and 1% for WMT) for early stopping and learning rate annealing (§4.3). For IWSLT’14, we replace words that occur fewer than 3 times with a <unk> symbol, which results in a vocabulary of 24158 English and 35882 German word types. For WMT datasets, we retain 200K source and 80K target words. For English-French only, we set the target vocabulary to 30K types to be comparable with previous work. 4.2 Model parameters We use 512 hidden units for both recurrent encoders and decoders. We reset the decoder hidden states to zero between sentences. For the convolutional encoder, 512 hidden units are used for each layer in CNN-a, while layers in CNN-c contain 256 units each. All embeddings, including the output produced by the decoder before the final linear layer, are of 256 dimensions. On the WMT corpora, we find that we can improve the performance of the bidirectional LSTM models (BiLSTM) by using 512dimensional word embeddings. Model weights are initialized from a uniform distribution within [−0.05, 0.05]. For convolutional layers, we use a uniform distribution of  −kd−0.5, kd−0.5 , where k is the kernel width (we use 3 throughout this work) and d is the input size for the first layer and the number of hidden units for subsequent layers (Collobert et al., 2011b). For CNN-c, we transform the input and output with a linear layer each to match the smaller embedding size. The model parameters were tuned on IWSLT’14 and cross-validated on the larger WMT corpora. 4.3 Optimization Recurrent models are trained with Adam as we found them to benefit from aggressive optimization. We use a step width of 3.125 · 10−4 and early stopping based on validation perplexity (Kingma and Ba, 2014). For non-recurrent encoders, we obtain best results with stochastic gradient descent (SGD) and annealing: we use a learning rate of 0.1 and once the validation perplexity stops improving, we reduce the learning rate by an order of magnitude each epoch until it falls below 10−4. For all models, we use mini-batches of 32 sentences for IWSLT’14 and 64 for WMT. We use truncated back-propagation through time to limit the length of target sequences per mini-batch to 25 words. Gradients are normalized by the mini-batch size. We re-normalize the gradients if their norm exceeds 25 (Pascanu et al., 2013). Gradients of convolutional layers are scaled by sqrt(dim(input))−1 similar to (Collobert et al., 2011b). We use dropout on the embeddings and decoder outputs hi with a rate of 0.2 for IWSLT’14 and 0.1 for WMT (Srivastava et al., 2014). All models are implemented in Torch (Collobert et al., 2011a) and trained on a single GPU. 4.4 Evaluation We report accuracy of single systems by training several identical models with different ran126 dom seeds (5 for IWSLT’14, 3 for WMT) and pick the one with the best validation perplexity for final BLEU evaluation. Translations are generated by a beam search and we normalize log-likelihood scores by sentence length. On IWSLT’14 we use a beam width of 10 and for WMT models we tune beam width and word penalty on a separate test set, that is newsdev2016 for WMT’16 English-Romanian, newstest2014 for WMT’15 English-German and ntst1213 for WMT’14 English-French.5 The word penalty adds a constant factor to log-likelihoods, except for the end-of-sentence token. 
Prior to scoring the generated translations against the respective references, we perform unknown word replacement based on attention scores (Jean et al., 2015). Unknown words are replaced by looking up the source word with the maximum attention score in a pre-computed dictionary. If the dictionary contains no translation, then we simply copy the source word. Dictionaries were extracted from the aligned training data that was aligned with fast align (Dyer et al., 2013). Each source word is mapped to the target word it is most frequently aligned to. For convolutional encoders with stacked CNN-c layers we noticed for some models that the attention maxima were consistently shifted by one word. We determine this per-model offset on the abovementioned development sets and correct for it. Finally, we compute case-sensitive tokenized BLEU, except for WMT’16 English-Romanian where we use detokenized BLEU to be comparable with Sennrich et al. (2016a).6 5 Results 5.1 Recurrent vs. Non-recurrent Encoders We first compare recurrent and non-recurrent encoders in terms of perplexity and BLEU on IWSLT’14 with and without position embeddings (§3.1) and include a phrase-based system (Koehn et al., 2007). Table 1 shows that a single-layer convolutional model with position embeddings (Convolutional) can outperform both a uni-directional LSTM encoder (LSTM) as well as a bi-directional LSTM encoder (BiLSTM). Next, we increase the depth of the convolutional encoder. We choose a 5Specifically, we select a beam from {5, 10} and a word penalty from {0, −0.5, −1, −1.5} 6https://github.com/moses-smt/ mosesdecoder/blob/617e8c8ed1630fb1d1/ scripts/generic/{multi-bleu.perl, mteval-v13a.pl} System/Encoder BLEU BLEU PPL wrd+pos wrd wrd+pos Phrase-based – 28.4 – LSTM 27.4 27.3 10.8 BiLSTM 29.7 29.8 9.9 Pooling 26.1 19.7 11.0 Convolutional 29.9 20.1 9.1 Deep Convolutional 6/3 30.4 25.2 8.9 Table 1: Accuracy of encoders with position features (wrd+pos) and without (wrd) in terms of BLEU and perplexity (PPL) on IWSLT’14 German to English translation; results include unknown word replacement. Deep Convolutional 6/3 is the only multi-layer configuration, more layers for the LSTMs did not improve accuracy on this dataset. good setting by independently varying the number of layers in CNN-a and CNN-c between 1 and 10 and obtained best validation set perplexity with six layers for CNN-a and three layers for CNN-c. This configuration outperforms BiLSTM by 0.7 BLEU (Deep Convolutional 6/3). We investigate depth in the convolutional encoder more in §5.3. Among recurrent encoders, the BiLSTM is 2.3 BLEU better than the uni-directional version. The simple pooling encoder which does not contain any parameters is only 1.3 BLEU lower than a unidirectional LSTM encoder and 3.6 BLEU lower than BiLSTM. The results without position embeddings (words) show that position information is crucial for convolutional encoders. In particular for shallow models (Pooling and Convolutional), whereas deeper models are less effected. Recurrent encoders do not benefit from explicit position information because this information can be naturally extracted through the sequential computation. When tuning model settings, we generally observe good correlation between perplexity and BLEU. However, for convolutional encoders perplexity gains translate to smaller BLEU improvements compared to recurrent counterparts (Table 1). We observe a similar trend on larger datasets. 
5.2 Evaluation on WMT Corpora Next, we evaluate the BiLSTM encoder and the convolutional encoder architecture on three larger tasks and compare against previously published results. On WMT’16 English-Romanian translation we compare to (Sennrich et al., 2016a), the winning single system entry for this language pair. Their model consists of a bi-directional GRU encoder, a GRU decoder and MLP-based attention. 127 WMT’16 English-Romanian Encoder Vocabulary BLEU (Sennrich et al., 2016a) BiGRU BPE 90K 28.1 Single-layer decoder BiLSTM 80K 27.5 Convolutional 80K 27.1 Deep Convolutional 8/4 80K 27.8 WMT’15 English-German Encoder Vocabulary BLEU (Jean et al., 2015) RNNsearch-LV BiGRU 500K 22.4 (Chung et al., 2016) BPE-Char BiGRU Char 500 23.9 (Yang et al., 2016) RNNSearch + UNK replace BiLSTM 50K 24.3 + recurrent attention BiLSTM 50K 25.0 Single-layer decoder BiLSTM 80K 23.5 Deep Convolutional 8/4 80K 23.6 Two-layer decoder Two-layer BiLSTM 80K 24.1 Deep Convolutional 15/5 80K 24.2 WMT’14 English-French (12M) Encoder Vocabulary BLEU (Bahdanau et al., 2015) RNNsearch BiGRU 30K 28.5 (Luong et al., 2015b) Single LSTM 6-layer LSTM 40K 32.7 (Jean et al., 2014) RNNsearch-LV BiGRU 500K 34.6 (Zhou et al., 2016) Deep-Att Deep BiLSTM 30K 35.9 Single-layer decoder BiLSTM 30K 34.3 Deep Convolutional 8/4 30K 34.6 Two-layer decoder 2-layer BiLSTM 30K 35.3 Deep Convolutional 20/5 30K 35.7 Table 2: Accuracy on three WMT tasks, including results published in previous work. For deep convolutional encoders, we include the number of layers in CNN-a and CNN-c, respectively. They use byte pair encoding (BPE) to achieve openvocabulary translation and dropout in all components of the neural network to achieve 28.1 BLEU; we use the same pre-processing but no BPE (§4). The results (Table 2) show that a deep convolutional encoder can perform competitively to the state of the art on this dataset (Sennrich et al., 2016a). Our bi-directional LSTM encoder baseline is 0.6 BLEU lower than the state of the art but uses only 512 hidden units compared to 1024. A singlelayer convolutional encoder with embedding size 256 performs at 27.1 BLEU. Increasing the number of convolutional layers to 8 in CNN-a and 4 in CNN-c achieves 27.8 BLEU which outperforms our baseline and is competitive to the state of the art. On WMT’15 English to German, we compare to a BiLSTM baseline and prior work: (Jean et al., 2015) introduce a large output vocabulary; the decoder of (Chung et al., 2016) operates on the character-level; (Yang et al., 2016) uses LSTMs instead of GRUs and feeds the conditional input to the output layer as well as to the decoder. Our single-layer BiLSTM baseline is competitive to prior work and a two-layer BiLSTM encoder performs 0.6 BLEU better at 24.1 BLEU. Previous work also used multi-layer setups, e.g., (Chung et al., 2016) has two layers both in the encoder and the decoder with 1024 hidden units, and (Yang et al., 2016) use 1000 hidden units per LSTM. We use 512 hidden units for both LSTM and convolutional encoders. Our convolutional model with either 8 or 15 layers in CNN-a outperform the BiLSTM encoder with both a single decoder layer or two decoder layers. Finally, we evaluate on the larger WMT’14 English-French corpus. On this dataset the recurrent architectures benefit from an additional layer both in the encoder and the decoder. 
For a singlelayer decoder, a deep convolutional encoder outperforms the BiLSTM accuracy by 0.3 BLEU and for a two-layer decoder, our very deep convolutional encoder with up to 20 layers outperforms the BiLSTM by 0.4 BLEU. It has 40% fewer parameters than the BiLSTM due to the smaller embedding sizes. We also outperform several previous systems, including the very deep encoder-decoder model proposed by (Luong et al., 2015a). Our best result is just 0.2 BLEU below (Zhou et al., 2016) who use a very deep LSTM setup with a 9-layer encoder, a 7-layer decoder, shortcut connections and extensive regularization with dropout and L2 regularization. 128 5.3 Convolutional Encoder Architecture Details We next motivate our design of the convolutional encoder (§3.2). We use the smaller IWSLT’14 German-English setup without unknown word replacement to enable fast experimental turn-around. BLEU results are averaged over three training runs initialized with different seeds. Figure 2 shows accuracy for a different number of layers of both CNNs with and without residual connections. Our first observation is that computing the conditional input ci directly over embeddings e (line ”without CNN-c”) is already working well at 28.3 BLEU with a single CNN-a layer and at 29.1 BLEU for CNN-a with 7 layers (Figure 2a). Increasing the number of CNN-c layers is beneficial up to three layers and beyond this we did not observe further improvements. Similarly, increasing the number of layers in CNN-a beyond six does not increase accuracy on this relatively small dataset. In general, choosing two to three times as many layers in CNN-a as in CNN-c is a good rule of thumb. Without residual connections, the model fails to utilize the increase in modeling power from additional layers, and performance drops significantly for deeper encoders (Figure 2b). Our convolutional architecture relies on two sets of networks, CNN-a for attention score computation ai and CNN-c for the conditional input ci to be fed to the decoder. We found that using the same network for both tasks, similar to recurrent encoders, resulted in poor accuracy of 22.9 BLEU. This compares to 28.5 BLEU for separate singlelayer networks, or 28.3 BLEU when aggregating embeddings for ci. Increasing the number of layers in the single network setup did not help. Figure 2(a) suggests that the attention weights (CNN-a) need to integrate information from a wide context which can be done with a deep stack. At the same time, the vectors which are averaged (CNN-c) seem to benefit from a shallower, more local representation closer to the input words. Two stacks are an easy way to achieve these contradicting requirements. In Appendix A we visualize attention scores and find that alignments for CNN encoders are less sharp compared to BiLSTMs, however, this does not affect the effectiveness of unknown word replacement once we adjust for shifted maxima. In Appendix B we investigate whether deep convolutional encoders are required for translating long sentences and observe that even relatively shallow encoders perform well on long sentences. 5.4 Training and Generation Speed For training, we use the fast CuDNN LSTM implementation for layers without attention and experiment on IWSLT’14 with batch size 32. The single-layer BiLSTM model trains at 4300 target words/second, while the 6/3 deep convolutional encoder compares at 6400 words/second on an NVidia Tesla M40 GPU. We do not observe shorter overall training time since SGD converges slower than Adam which we use for BiLSTM models. 
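Pulling the architectural choices above together, the following PyTorch sketch shows one way to realize the encoder: word plus position embeddings feed two separate convolutional stacks, CNN-a for the attention keys and CNN-c for the vectors that are aggregated into the conditional input, with residual connections around layers of matching width. The hyper-parameters follow §4.2 (256-dimensional embeddings, 512 units per CNN-a layer, 256 per CNN-c layer, kernel width 3); the tanh non-linearity, the exact residual placement, and the omission of the decoder-side attention computation are simplifying assumptions of this sketch, not a reproduction of the authors' Torch code.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self, vocab_size, max_len=1024, emb=256,
                 layers_a=6, units_a=512, layers_c=3, units_c=256, kernel=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb)
        self.pos_emb = nn.Embedding(max_len, emb)   # position embeddings (Section 3.1)
        self.cnn_a = self._stack(emb, units_a, layers_a, kernel)
        self.cnn_c = self._stack(emb, units_c, layers_c, kernel)

    @staticmethod
    def _stack(in_dim, units, n_layers, k):
        return nn.ModuleList([
            nn.Conv1d(in_dim if i == 0 else units, units, k, padding=k // 2)
            for i in range(n_layers)])

    @staticmethod
    def _run(stack, h):
        # h: (batch, channels, length); add a residual connection whenever widths match
        for conv in stack:
            out = torch.tanh(conv(h))
            h = out + h if out.shape == h.shape else out
        return h

    def forward(self, src):  # src: (batch, length) of token ids
        pos = torch.arange(src.size(1), device=src.device).unsqueeze(0)
        e = (self.word_emb(src) + self.pos_emb(pos)).transpose(1, 2)
        z = self._run(self.cnn_a, e)   # used to compute the attention scores a_i
        c = self._run(self.cnn_c, e)   # averaged into the conditional input c_i
        return z.transpose(1, 2), c.transpose(1, 2)

enc = ConvEncoder(vocab_size=35882)            # German vocabulary size from Section 4.1
keys, values = enc(torch.randint(0, 35882, (2, 20)))
print(keys.shape, values.shape)                # (2, 20, 512) and (2, 20, 256)
```

Keeping the two stacks separate mirrors the observation above that attention scores benefit from a deep, wide-context network, while the aggregated vectors work better with a shallower representation closer to the input words.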
We measure generation speed on an Intel Haswell CPU clocked at 2.50GHz with a single thread for BLAS operations. We use vocabulary selection which can speed up generation by up to a factor of ten at no cost in accuracy via making the time to compute the final output layer negligible (Mi et al., 2016; L’Hostis et al., 2016). This shifts the focus from the efficiency of the encoder to the efficiency of the decoder. On IWSLT’14 (Table 3a) the convolutional encoder increases the speed of the overall model by a factor of 1.35 compared to the BiLSTM encoder while improving accuracy by 0.7 BLEU. In this setup both encoders models have the same hidden layer and embedding sizes. On the larger WMT’15 English-German task (Table 3b) the convolutional encoder speeds up generation by 2.1 times compared to a two-layer BiLSTM. This corresponds to 231 source words/second with beam size 5. Our best model on this dataset generates 203 words/second but at slightly lower accuracy compared to the full vocabulary setting in Table 2. The recurrent encoder uses larger embeddings than the convolutional encoder which were required for the models to match in accuracy. The smaller embedding size is not the only reason for the speed-up. In Table 3a (a), we compare a Conv 6/3 encoder and a BiLSTM with equal embedding sizes. The convolutional encoder is still 1.34x faster (at 0.7 higher BLEU) although it requires roughly 1.6x as many FLOPs. We believe that this is likely due to better cache locality for convolutional layers on CPUs: an LSTM with fused gates7 requires two big matrix multiplications with different weights as well as additions, multiplications and non-linearities for each source word, while the output of each convolutional layer can be computed as whole with a single matrix multiply. For comparison, the quantized deep LSTM7Our bi-directional LSTM implementation is based on torch rnnlib which uses fused LSTM gates (https://github.com/facebookresearch/ torch-rnnlib/) and which we consider an efficient implementation. 129 28 28.5 29 29.5 30 1 2 3 4 5 6 7 8 9 10 BLEU Number of Layers in CNN-a without CNN-c 1-layer CNN-c 2-layer CNN-c 3-layer CNN-c 4-layer CNN-c (a) With residual connections 28 28.5 29 29.5 30 1 2 3 4 5 6 7 8 9 10 BLEU Number of Layers in CNN-a 1-layer CNN-c, no res. 2-layer CNN-c, no res. 3-layer CNN-c, no res. (b) Without residual connections Figure 2: Effect of encoder depth on IWSLT’14 with and without residual connections. The x-axis varies the number of layers in CNN-a and curves show different CNN-c settings. Encoder Words/s BLEU BiLSTM 139.7 22.4 Deep Conv. 6/3 187.9 23.1 (a) IWSLT’14 German-English generation speed on tst2013 with beam size 10. Encoder Words/s BLEU 2-layer BiLSTM 109.9 23.6 Deep Conv. 8/4 231.1 23.7 Deep Conv. 15/5 203.3 24.0 (b) WMT’15 English-German generation speed on newstest2015 with beam size 5. Table 3: Generation speed in source words per second on a single CPU core using vocabulary selection. based model in (Wu et al., 2016) processes 106.4 words/second for English-French on a CPU with 88 cores and 358.8 words/second on a custom TPU chip. The optimized RNNsearch model and C++ decoder described by (Junczys-Dowmunt et al., 2016) translates 265.3 words/s on a CPU with a similar vocabulary selection technique, computing 16 sentences in parallel, i.e., 16.6 words/s on a single core. 6 Conclusion We introduced a simple encoder model for neural machine translation based on convolutional networks. 
This approach is more parallelizable than recurrent networks and provides a shorter path to capture long-range dependencies in the source. We find it essential to use source position embeddings as well as different CNNs for attention score computation and conditional input aggregation. Our experiments show that convolutional encoders perform on par or better than baselines based on bi-directional LSTM encoders. In comparison to other recent work, our deep convolutional encoder is competitive to the best published results to date (WMT’16 English-Romanian) which are obtained with significantly more complex models (WMT’14 English-French) or stem from improvements that are orthogonal to our work (WMT’15 English-German). Our architecture also leads to large generation speed improvements: translation models with our convolutional encoder can translate twice as fast as strong baselines with bi-directional recurrent encoders. Future work includes better training to enable faster convergence with the convolutional encoder to better leverage the higher processing speed. Our fast architecture is interesting for character level encoders where the input is significantly longer than for words. Also, we plan to investigate the effectiveness of our architecture on other sequence-tosequence tasks, e.g. summarization, constituency parsing, dialog modeling. 130 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. James Bradbury and Richard Socher. 2016. MetaMind Neural Machine Translation System for WMT 2016. In Proc. of WMT. Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the Properties of Neural Machine Translation: Encoder-decoder Approaches. In Proc. of SSST. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proc. of EMNLP. Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A Character-level Decoder without Explicit Segmentation for Neural Machine Translation. arXiv preprint arXiv:1603.06147 . Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. 2011a. Torch7: A Matlab-like Environment for Machine Learning. In BigLearn, NIPS Workshop. http://torch.ch. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011b. Natural Language Processing (almost) from scratch. JMLR 12(Aug):2493–2537. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A Simple, Fast, and Effective Reparameterization of IBM Model 2. Proc. of ACL. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep Residual Learning for Image Recognition. In Proc. of CVPR. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735– 1780. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On Using Very Large Target Vocabulary for Neural Machine Translation. arXiv preprint arXiv:1412.2007v2 . S´ebastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. Montreal Neural Machine Translation systems for WMT15. In Proc. of WMT. pages 134–140. Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. 
Is Neural Machine Translation Ready for Deployment? A Case Study on 30 Translation Directions. arXiv preprint arXiv:1610.01108 . Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proc. of EMNLP. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural Machine Translation in Linear Time. arXiv . Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. Proc. of ICLR . Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proc. of ACL. Andrew Lamb and Michael Xie. 2016. Convolutional Encoders for Neural Machine Translation. https://cs224d.stanford.edu/ reports/LambAndrew.pdf. Accessed: 201010-31. Gurvan L’Hostis, David Grangier, and Michael Auli. 2016. Vocabulary Selection Strategies for Neural Machine Translation. arXiv preprint arXiv:1610.00072 . Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015a. Effective approaches to attentionbased neural machine translation. In Proc. of EMNLP. Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the Rare Word Problem in Neural Machine Translation. In Proc. of ACL. Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. 2015. Encoding Source Language with Convolutional Neural Network for Machine Translation. In Proc. of ACL. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary Manipulation for Neural Machine Translation. arXiv preprint arXiv:1605.03209 . Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the Difficulty of Training Recurrent Neural Networks. ICML (3) 28:1310–1318. Ngoc-Quan Pham, Germn Kruszewski, and Gemma Boleda. 2016. Convolutional Neural Network Language Models. In Proc. of EMNLP. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level Training with Recurrent Neural Networks. In Proc. of ICLR. Holger Schwenk. 2014. http://www-lium. univ-lemans.fr/˜schwenk/cslm_joint_ paper/. Accessed: 2016-10-15. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for wmt 16. 131 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proc. of ACL. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent Neural Networks from overfitting. JMLR 15:1929–1958. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, and Arthur Szlam. 2015. End-to-end Memory Networks. In Proc. of NIPS. pages 2440–2448. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Proc. of NIPS. pages 3104–3112. Zhaopeng Tu, Baotian Hu, Zhengdong Lu, and Hang Li. 2015. Context-dependent Translation selection using Convolutional Neural Network. In Proc. of ACLIJCNLP. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144 . Zichao Yang, Zhiting Hu, Yuntian Deng, Chris Dyer, and Alex Smola. 2016. 
Neural Machine Translation with Recurrent Attention Modeling. arXiv preprint arXiv:1607.05108.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation. arXiv preprint arXiv:1606.04199.

A Alignment Visualization

In Figure 4 and Figure 5, we plot attention scores for a sample WMT'15 English-German and WMT'14 English-French translation with BiLSTM and deep convolutional encoders. The translation is on the x-axis and the source sentence on the y-axis. The attention scores of the BiLSTM output are sharp but do not necessarily represent a correct alignment. For CNN encoders the scores are less focused but still indicate an approximate source location, e.g., in Figure 4b, when moving the clause "over 1,000 people were taken hostage" to the back of the translation. For some models, attention maxima are consistently shifted by one token, as in both Figure 4b and Figure 5b. Interestingly, convolutional encoders tend to focus on the last token (Figure 4b) or on both the first and last tokens (Figure 5b). Motivated by the hypothesis that this may be due to the decoder depending on the length of the source sentence (which it cannot determine without position embeddings), we explicitly provided a distributed representation of the input length to the decoder and attention module. However, this did not change the attention patterns, nor did it improve translation accuracy.

B Performance by Sentence Length

[Figure 3: BLEU per sentence length on WMT'15 English-German newstest2015, plotted for the 2-layer BiLSTM and the Deep Conv. 6/3, 8/4, and 15/5 encoders. The test set is partitioned into 15 equally-sized buckets according to source sentence length.]

One characteristic of our convolutional encoder architecture is that the context over which outputs are computed depends on the number of layers. With bi-directional RNNs, every encoder output depends on the entire source sentence. In Figure 3, we evaluate whether limited context affects the translation quality on longer sentences of WMT'15 English-German, which often requires moving verbs over long distances. We sort the newstest2015 test set by source length, partition it into 15 equally-sized buckets, and compare the BLEU scores of the models listed in Table 2 on a per-bucket basis. There is no clear evidence for sub-par translations on sentences that are longer than the observable context per encoder output. We include a small encoder with a 6-layer CNN-c and a 3-layer CNN-a in the comparison, which performs worse than a 2-layer BiLSTM (23.3 BLEU vs. 24.1). With 6 convolutional layers at kernel width 3, each encoder output contains information from 13 adjacent source words. Looking at the accuracy for sentences with 15 words or more, this relatively shallow CNN is either on par with or better than the BiLSTM for 5 out of 10 buckets; the BiLSTM has access to the entire source context. Similar observations can be made for the deeper convolutional encoders.

[Figure 4: Attention scores for a WMT'15 English-German translation of a newstest2015 sentence: (a) 2-layer BiLSTM encoder; (b) deep convolutional encoder with 15-layer CNN-a and 5-layer CNN-c.]

[Figure 5: Attention scores for a WMT'14 English-French translation of a sentence from ntst14: (a) 2-layer BiLSTM encoder; (b) deep convolutional encoder with 20-layer CNN-a and 5-layer CNN-c.]
2017
12
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1308–1319 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1120 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1308–1319 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1120 Chat Detection in an Intelligent Assistant: Combining Task-oriented and Non-task-oriented Spoken Dialogue Systems Satoshi Akasaki∗ The University of Tokyo [email protected] Nobuhiro Kaji Yahoo Japan Corporation [email protected] Abstract Recently emerged intelligent assistants on smartphones and home electronics (e.g., Siri and Alexa) can be seen as novel hybrids of domain-specific taskoriented spoken dialogue systems and open-domain non-task-oriented ones. To realize such hybrid dialogue systems, this paper investigates determining whether or not a user is going to have a chat with the system. To address the lack of benchmark datasets for this task, we construct a new dataset consisting of 15, 160 utterances collected from the real log data of a commercial intelligent assistant (and will release the dataset to facilitate future research activity). In addition, we investigate using tweets and Web search queries for handling open-domain user utterances, which characterize the task of chat detection. Experiments demonstrated that, while simple supervised methods are effective, the use of the tweets and search queries further improves the F1-score from 86.21 to 87.53. 1 Introduction 1.1 Chat detection Conventional studies on spoken dialogue systems (SDS) have investigated either domain-specific task-oriented SDS1 (Williams and Young, 2007) or open-domain non-task-oriented SDS (a.k.a., chatbots or chat-oriented SDS) (Wallace, 2009). The former offers convenience by helping users complete tasks in specific domains, while the latter ∗Work done during internship at Yahoo Japan Corporation. 1They can be classified as single-domain or multi-domain task-oriented SDS. offers entertainment through open-ended chatting (or smalltalk) with users. Although the functionalities offered by the two types of SDS are complementary to each other, little practical effort has been made to combine them. This unfortunately has limited the potential of SDS. This situation is now being changed by the emergence of voice-activated intelligent assistants on smartphones and home electronics (e.g., Siri2 and Alexa3). These intelligent assistants typically perform various tasks (e.g., Web search, weather checking, and alarm setting) while being able to have chats with users. They can be seen as a novel hybrid of multi-domain task-oriented SDS and open-domain non-task-oriented SDS. To realize such hybrid SDS, we have to determine whether or not a user is going to have a chat with the system. For example, if a user says “What is your hobby?” it is considered that she is going to have a chat with the system. On the other hand, if she says “Set an alarm at 8 o’clock,” she is probably trying to operate her smartphone. We refer to this task as chat detection and treat it as a binary classification problem. Chat detection has not been explored enough in past studies. This is primarily because little attempts have been made to develop hybrids of task-oriented and non-task-oriented SDS (see Section 2 for related work). 
Although task-oriented and non-task-oriented SDS have long research histories, both of them do not require chat detection. Typically, users of task-oriented SDS do not have chats with the systems and users of non-taskoriented SDS always have chats with the systems. 1.2 Summary of this paper In this work, we construct a new dataset for chat detection. As we already discussed, chat detection 2http://www.apple.com/ios/siri 3https://developer.amazon.com/alexa 1308 has not been explored enough, and thus there exist no benchmark datasets available. To address this situation, we collected 15, 160 user utterances from real log data of a commercial intelligent assistant, and recruited crowd workers to annotate those utterances with whether or not the users are going to have chats with the intelligent assistant. The resulting dataset will be released to facilitate future studies. The technical challenge in chat detection is that we have to handle open-ended utterances of intelligent assistant users. Commercial intelligent assistants have a vast amount of users and they talk about a wide variety of topics especially when chatting with the assistants. It consequently becomes labor-intensive to collect a sufficiently large amount of annotated data for training accurate chat detectors. We develop supervised binary classifiers to perform chat detection. We address the open-ended user utterances, which characterize chat detection, by using unlabeled external resources. We specifically utilize tweets (i.e., Twitter posts) and Web search queries to enhance the supervised classifiers. Experimental results demonstrated that, while simple supervised methods are effective, the external resources are able to further improve them. The results demonstrated that the use of the external resources increases over 1 point of F1-score (from 86.21 to 87.53). 2 Related Work 2.1 Previous studies on combining task-oriented and non-task-oriented SDS Task-oriented and non-task-oriented SDS have long been investigated independently, and little attempts have been made to develop hybrids of the two types of SDS. As a consequence, previous studies have not investigated chat detection without only a few exceptions.4 Niculescu and Banchs (2015) explored using non-task-oriented SDS as a back-off mechanism for task-oriented SDS. They, however, did not propose any concrete methods of automatically determining when to switch to non-task-oriented SDS. 4Unfortunately, we cannot discuss little about chat detection in existing commercial intelligent assistants since most of their technical details have not been disclosed. We make the best effort to compensate for it by comparing the proposed methods with our in-house intelligent assistant in the experiment. Lee et al. (2007) proposed an example-based dialogue manager to combine task-oriented and nontask-oriented SDS. In such a framework, however, it is difficult to flexibly utilize state-of-the-art supervised classifiers as a component. Other studies proposed machine-learning-based frameworks for combining multi-domain taskoriented SDS and non-task-oriented SDS (Wang et al., 2014; Sarikaya, 2017). These assume that several components including a chat detector are already available, and explore integrating those components. They discuss little on how to develop each of the components. On the other hand, the focus of this work is to develop one of those components, a chat detector. 
Although it lies outside the scope of this paper to explore how to exploit chat detection method in a full dialogue system, the chat detection method is considered to serve, for example, as one component within those frameworks. 2.2 Intent and domain determination Chat detection is related to, but different from, intent and domain determination that have been studied in the field of SDS (Guo et al., 2014; Xu and Sarikaya, 2014; Ravuri and Stolcke, 2015; Kim et al., 2016; Zhang and Wang, 2016). Both intent and domain determination have been investigated in domain-specific task-oriented SDS. Intent determination aims to determine the type of information a user is seeking in singledomain task-oriented SDS. For example, in the ATIS dataset, which is collected from an airline travel information service system, the information type includes flight, city, and so on (Tur et al., 2010). On the other hand, domain determination aims to determine which domain is relevant to a given user utterance in multi-domain task-oriented SDS (Xu and Sarikaya, 2014). Note that it is possible that domain determination is followed by intent determination. Unlike intent and domain determination, chat detection targets hybrid systems of multi-domain task-oriented SDS and open-domain non-taskoriented SDS, and aims to determine whether the non-task-oriented component is responsible to a given user utterance or not (i.e., the user is going to have a chat or not). Therefore, the objective of chat detection is different from intent and domain determination. It may be possible to see chat detection as a spe1309 cific problem of domain determination (Sarikaya, 2017). We, nevertheless, discuss it as a different problem because of the uniqueness of the “chat domain.” It greatly differs from ordinary domains in that it plays a role of combining the two different types of SDS that have long been studied independently, rather than combining multiple SDS of the same types. In addition, we discuss the use of external resources, especially tweets, for chat detection. This approach is unique to chat detection and is not considered effective for ordinary domain determination. It is interesting to note that chat detection is not followed by slot-filling unlike intent and domain determination, as far as we use a popular response generator such as seq2seq model (Sutskever et al., 2014) or an information retrieval based approach (Yan et al., 2016). Although joint intent (or domain) determination and slot-filling has been widely studied to improve accuracy (Guo et al., 2014; Zhang and Wang, 2016), the same approach is not feasible in chat detection. 2.3 Intelligent assistant Previous studies on intelligent assistants have not investigated chat detection. Their research topics are centered around those on user behaviors including the prediction of user satisfaction and engagement (Jiang et al., 2015; Kobayashi et al., 2015; Sano et al., 2016; Kiseleva et al., 2016a,b) and gamification (Otani et al., 2016). For example, Jiang et al. (2015) investigated predicting whether users are satisfied with the responses of intelligent assistants by combining diverse features including clicks and utterances. Sano et al. (2016) explored predicting whether users will keep using the intelligent assistants in the future by using long-term usage histories. 
Some earlier works used the Cortana dataset as a benchmark of domain determination (Guo et al., 2014; Xu and Sarikaya, 2014; Kim et al., 2016) or proposed a development framework for Cortana (Crook et al., 2016). Those studies, however, regarded the intelligent assistant as merely one example of multi-domain task-oriented SDS and did not explore chat detection. 2.4 Non-task-oriented SDS Non-task-oriented SDS have long been studied in the research community. While early studies adopted rule-based methods (Weizenbaum, 1966; Wallace, 2009), statistical approaches have recently gained much popularity (Ritter et al., 2011; Vinyals and Le, 2015). This research direction was pioneered by Ritter et al. (2011), who applied a phrase-based SMT model to the response generation. Later, Vinyals and Le (2015) used the seq2seq model (Sutskever et al., 2014). To date, a number follow-up studies have been made to improve on the response quality (Hasegawa et al., 2013; Shang et al., 2015; Sordoni et al., 2015; Li et al., 2016a,b; Gu et al., 2016; Yan et al., 2016). Those studies assume that users always want to have chats with systems and investigate only methods of generating appropriate responses to given utterances. Chat detection is required for integrating those response generators into intelligent assistants. 2.5 Use of conversational data The recent explosion of conversational data on the Web, especially tweets, have triggered a variety of dialogue studies. Those typically used tweets either for training response generators (c.f., Section 2.4) or for discovering dialogue acts in an unsupervised fashion (Ritter et al., 2010; Higashinaka et al., 2011). This treatment of tweets differs from that in our work. 3 Chat Detection Dataset In this section we explain how we constructed the new benchmark dataset for chat detection. We then analyze the data to provide insights into the actual user behavior. 3.1 Construction procedure We sampled 15, 160 unique utterances5 (i.e., automatic speech recognition results) from the real log data of a commercial intelligent assistant, Yahoo! Voice Assist.6 The log data were collected between Jan. and Aug. 2016. In the log data, some utterances such as “Hello” appear frequently. To construct a dataset containing both high and low frequency utterances, we set frequency thresholds7 to divide the utterances into three groups (high, middle, and low frequency) and then randomly sampled the same number of utterances 5The utterances are all in Japanese. Example utterances given in this paper are English translations. 6https://v-assist.yahoo.co.jp 7We cannot disclose the exact threshold values so as to keep the detailed statistics of the original log data confidential. 1310 Label Example No. of votes CHAT Let’s talk about something. 5 What is your hobby? 7 I don’t have any holidays this month. 5 I’m walking around now. 6 Do you like cats? 5 You are a serious geek. 7 NONCHAT Show me a picture of Mt. Fuji. 6 What’s the highest building in the world? 5 A nice restaurant near here. 7 Wake me up at 9:10. 7 Brighten the screen. 6 Turn off the alarm. 7 Table 1: Example utterances and the numbers of votes. NONCHAT utterances are further divided into information seeking (top) and device control (bottom) to facilitate readers’ understanding. #Votes No. of utterances 4 1701 5 2670 6 4978 7 5811 Table 2: Distribution of the numbers of votes. from each of the three groups. 
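A minimal sketch of this frequency-bucketed sampling is given below; the threshold values are placeholders we invented for illustration, since the actual thresholds are kept confidential, and the function name is ours.

```python
import random
from collections import Counter

def stratified_sample(utterances, n_per_group, high_thr=100, mid_thr=10, seed=0):
    """Split unique utterances into high/middle/low-frequency groups and draw
    the same number from each group. The thresholds here are illustrative only."""
    counts = Counter(utterances)
    groups = {"high": [], "middle": [], "low": []}
    for utt, c in counts.items():
        if c >= high_thr:
            groups["high"].append(utt)
        elif c >= mid_thr:
            groups["middle"].append(utt)
        else:
            groups["low"].append(utt)
    rng = random.Random(seed)
    sample = []
    for members in groups.values():
        sample.extend(rng.sample(members, min(n_per_group, len(members))))
    return sample
```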
During the data collection, we ensured privacy by manually removing utterances that included the full name of a person or detailed address information. Next, we recruited crowd workers to annotate the 15, 160 utterances with two labels, CHAT and NONCHAT. The workers annotated the CHAT label when users were going to have chats with the intelligent assistant and annotated the NONCHAT label when users were seeking some information (e.g., searching the Web or checking the weather) or were trying to operate the smartphones (e.g., setting alarms or controlling volume). Note that our intelligent assistant works primarily on smartphones and thus the NONCHAT utterances include many operational instructions such as alarm setting. Example utterances are given in Table 1. Seven workers were assigned to each utterance, and the final labels were obtained by majority vote to address the quality issue inherent in crowdsourcing. The last column in Table 1 shows the number of votes that the majority label obtained. For example, five workers provided the CHAT label (and the other two provided the NONCHAT label) to the first utterance “Let’s talk about something.” 3.2 Data analysis The construction process described above yielded a dataset made up of 4, 833 CHAT and 10, 327 NONCHAT utterances. We investigated the annotation agreement among the crowd workers. Table 2 shows the distribution of the numbers of votes that the majority labels obtained. The annotation given by the seven workers agreed perfectly in 5, 811 of the 15, 160 utterances (38%). Also, at least six workers agreed in the majority of cases, 10, 789 (= 4, 978 + 5, 811) utterances (71%). This indicates high agreement among the workers and the reliability of the annotation results. During the data construction, we found that a typical confusing case arises when the utterance can be interpreted as an implicit information request. For example, the utterance “I am hungry” can be seen as the user trying to have a chat with the assistant, but it might be the case that she is looking for a local restaurant. Similar examples include “I have a backache” and so on. One solution in this case might be to ask the user a clarification question (Schl¨oder and Fernandez, 2015). Such an exploration is left for our future research. Additionally, we manually classified the CHAT utterances according to their dialogue acts to figure out how real users have chats with the intelligent assistant (Table 3). The set of dialogue acts was designed by referring to (Meguro et al., 2010). As shown in Table 3, while some of the utterances are boilerplates (e.g., those in the GREETING act) and thus have limited variety, the majority of the utterances exhibit tremendous diversity. We see 1311 Dialogue act (No. of Utter.) Example GREETING (206) Hello. Merry Christmas. SELFDISCLOSURE (1164) I am free today. I have a sore throat. ORDER (716) Please cheer me up. Give me a song! QUESTION (1551) Do you have emotions? Are you angry? INVITATION (130) Let’s play with me! Let’s go to karaoke next time. INFORMATION (214) My cat is acting strange. It snowed a lot. THANKS (126) Thank you. You are cool! CURSE (172) You’re an idiot. You are useless. APOLOGY (9) I’m sorry. I mistook, sorry. INTERJECTION (151) Whoof. Yeah, yeah. MISC (394) May the force be with you. Cock-a-doodle-doo. Table 3: Distribution over dialogue acts and example utterances. a wide variety of topics including private issues (e.g., “I am free today”) and questions to the assistant (e.g., “Are you angry?”). 
Also, we even see a movie quote (“May the force be with you”) and a rooster crow (“Cock-a-doodle-doo”) in the MISC act. These clearly represent the open-domain nature of the user utterances in intelligent assistants. Interestingly, some users curse at the intelligent assistant probably because it failed to make appropriate responses (see the CURSE act). Although such user behavior would not be observed from paid research participants, we observe a certain amount of curse utterances in the real data. 4 Detection Method We formulate chat detection as a binary classification problem to train supervised classifiers. In this section, we first explain the two types of classifiers explored in this paper, and then investigate the use of external resources for enhancing those classifiers. 4.1 Base classifiers The first classifier utilizes SVM for its popularity and efficiency. It uses character and word ngram (n = 1 and 2) features. It also uses word embedding features (Turian et al., 2010). A skipgram model (Mikolov et al., 2013) is trained on Figure 1: Feature vector representation of the example utterance “Today’s weather.” The upper three parts of the vector represent the features described in Section 4.1 (character n-gram, word ngram, and average of the word embeddings). The three additional features explained in Section 4.2 are added as two real-valued features (Tweet GRU and Query GRU) and one binary feature (Query binary). the entire intelligent assistant log8 to learn word embeddings. The embeddings of the words in the utterance are then averaged to produce additional features. The second classifier uses a convolutional neural network (CNN) because it has recently proven to perform well on text classification problems (Kim, 2014; Johnson and Zhang, 2015a,b). We follow (Kim, 2014) to develop a simple CNN that has a single convolution and max-pooling layer followed by the soft-max layer. We use a rectified linear unit (ReLU) as the non-linear activation function. The same word embeddings as SVM are used for the pre-training. 4.2 Using external resources We next investigate using external resources for enhancing the base classifiers. Thanks to the rapid evolution of the Web in the past decade, a variety of textual data including not only conversational (i.e., chat-like) but also non-conversational ones are abundantly available nowadays. These data offer an effective way of enhancing the base classifiers. We specifically use tweets and Web search queries as conversational and non-conversational text data, respectively. We train character-based9 language models on 8We used the same log data used in Section 3. The detailed statistics is confidential. 9We also trained word-based language models in prelim1312 Figure 2: Architecture of our CNN-based classifier when the input utterance is “Today’s weather.” The output layer of CNN and the three additional features explained in Section 4.2 are concatenated. The resulting vector is fed to the soft-max function. tweets and Web search queries, and use their scores (i.e., the normalized log probabilities of the utterance) as two additional features. Let u = c1, c2, . . . , cm be an utterance made up of m characters. Then, the score scorer(u) of the language model trained on the external resource r ∈{tweet, query} is defined as scorer(u) = 1 m m ∑ t=1 log pr(ct | c1, . . . , ct−1). The GRU language model is adopted for its superior performance (Cho et al., 2014; Chung et al., 2014). 
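A minimal sketch of this scoring feature is shown below. A toy unigram character model stands in for the GRU language model purely to keep the example runnable; it is not the interface of the toolkit used in the experiments, and all names are illustrative. In the full feature set, these two scores are combined with the binary query-log feature introduced below.

```python
import math
from collections import Counter

class ToyCharLM:
    """Stand-in for a character-level LM: a unigram model with add-one smoothing.
    It ignores the history, which a real GRU language model would condition on."""
    def __init__(self, corpus):
        self.counts = Counter("".join(corpus))
        self.total = sum(self.counts.values())
        self.vocab = len(self.counts) + 1

    def logprob(self, history, next_char):
        return math.log((self.counts[next_char] + 1) / (self.total + self.vocab))

def lm_score(utterance, char_lm):
    """score_r(u): average per-character log-probability under the model for resource r."""
    chars = list(utterance)
    if not chars:
        return 0.0
    return sum(char_lm.logprob(chars[:t], c) for t, c in enumerate(chars)) / len(chars)

tweet_lm = ToyCharLM(["let's talk", "what is your hobby"])
query_lm = ToyCharLM(["weather tokyo", "restaurant near me"])
print(lm_score("do you like cats", tweet_lm), lm_score("do you like cats", query_lm))
```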
Let xt be the embedding of t-th character and ht be the t-th hidden state. GRU computes the hidden state as ht = (1 −zt) ⊙ht−1 + zt ⊙˜ht zt = σ(W(z)zt + U(z)ht−1) ˜ht = tanh(W(h)xt + U(h)(rt ⊙ht−1)) rt = σ(W(r)xt + U(r)ht−1) where ⊙is the element-wise multiplication, σ is the sigmoid and tanh is the hyperbolic tangent. W(z), U(z), W(h), U(h), W(r), and U(r) are weight matrices. The hidden states are fed to the soft-max to predict the next word. We also use a binary feature indicating whether the utterance appears in the Web search query log inary experiments and found that character-based ones perform consistently better. or not. We observe that some NONCHAT utterances are made up of single entities such as location and product names. Such utterances are considered to be seeking information on those entities. We therefore use the query log as an entity dictionary to derive a feature indicating whether the utterance is likely to be a single entity. The resulting three features are incorporated into the SVM-based classifier straightforwardly (Figure 1). For the CNN-based classifier, they are provided as additional inputs to the soft-max layer (Figure 2). 5 Experimental Results We empirically evaluate the proposed methods on the chat detection dataset. 5.1 Experimental settings We performed 10-fold cross validation on the chat detection dataset to train and evaluate the proposed classifiers. In each fold, we used 80%, 10%, and 10% of the data for the training, development, and evaluation, respectively. We used word2vec10 to learn 300 dimensional word embeddings. They were used to induce the additional 300 features for SVM. They were also used as the pre-trained word embeddings for CNN. We used the faster-rnn toolkit11 to train the GRU language models. The size of the embedding and hidden layer was set to 256. Noise contrastive estimation (Gutmann and Hyv¨arinen, 2010) was used to train the soft-max function and the number of noise samples was set to 50. Maximum entropy 4-gram models were also trained to yield a combined model (Mikolov et al., 2011). The language models were trained on 100 millions tweets collected between Apr. and July 2016 and 100 million Web search queries issued between Mar. and Jun. 2016. The tweets were sampled from those received replies to collect only conversational tweets (Ritter et al., 2011). The same Web search queries were used to derive the binary feature. Although it is difficult to release those data, we plan to make the feature values available together with the benchmark dataset. We used liblinear12 to train L2-regularized L2-loss SVM. The hyperparameter c was tuned 10https://code.google.com/archive/p/word2vec 11https://github.com/yandex/faster-rnnlm 12https://www.csie.ntu.edu.tw/˜cjlin/liblinear 1313 Model Acc. P R F1 Majority 68.12 N/A N/A N/A Tweet GRU 72.07 54.54 74.40 62.94 In-house IA 78.31 62.57 79.51 70.03 SVM 90.51 86.42 83.45 84.91 SVM+embed. 91.35 87.62 84.88 86.21 SVM+embed.+tweet-query 92.15 88.61 86.50 87.53 CNN 85.16 83.40 68.12 74.41 CNN+pre-train. 90.84 87.03 83.80 85.36 CNN+pre-train.+tweet-query 91.48 87.78 85.18 86.56 Table 4: Chat detection results. over {2−10, 2−9, . . . , 210}. The CNN was implemented with chainer.13 We tuned the number of feature maps over {100, 150}, and filter region sizes over {{2}, {3}, {1, 2}, {2, 3}, {3, 4}, {1, 2, 3}, {2, 3, 4}}. The mini-batch size was set to 32. The dropout rate was set to 0.5. 
We used Adam (α = 0.001, β1 = 0.9, β2 = 0.999, and ϵ = 10−8) to perform stochastic gradient descent (Kingma and Ba, 2015). 5.2 Baselines The following baseline methods were implemented for comparison: Majority Utterances are always classified as the majority class, NONCHAT. Tweet GRU Utterances are classified as CHAT if the score of the GRU language model trained on the tweets exceeds a threshold. We used exactly the same GRU language model as the one that was used for deriving the feature. The threshold was calibrated on the development data by maximizing the F1-score of the CHAT class. In-house IA Our in-house intelligent assistant system, which adopts a hybrid of rule-based and example-based approaches. Since we cannot disclose its technical details, the result is presented just for reference. 5.3 Result Table 4 gives the precision, recall, F1-score (for the CHAT class), and overall classification accuracy results. We report only accuracy for Majority baseline. +embed. and +pre-train. represent using the word embedding features for SVM 13http://chainer.org and the pre-trained word embeddings for CNN, respectively. +tweet-query represents using the three features derived from the tweets and Web search query. Table 4 represents that both of the classifiers, SVM and CNN, perform accurately. We see that both +embed. and +pre-train. improve the results. The best performing method, SVM+embed.+tweet-query, achieves 92% accuracy and 87% F1-score, outperforming all of the baselines. CNN performed worse than SVM contrary to results reported by recent studies (Kim, 2014). We think this is because the architecture of our CNN is rather simplistic. It might be possible to improve the CNN-based classifier by adopting more complex network, although it is likely to come at the cost of extra training time. Another reason would be that our SVM classifier uses carefully designed features beyond word 1-grams. Table 4 also represents that the external resources are effective, improving F1-scores almost 1 points in both SVM and CNN. Table 5 illustrates example utterances and their language model scores. We see that the language models trained on the tweets and queries successfully provide the CHAT utterances with high and low scores, respectively. Table 6 shows chat detection results when each of the three features derived from the external resources is added to SVM+embed. The results represent that they are all worse than SVM+embed.+tweet-query and thus it is crucial to combine all of them for achieving the best performance. Table 7 shows examples of feature weights of SVM+embed.+tweet-query. Tweet GRU and query GRU denote the language model score features. The others are word n-gram features. We see that the language model scores have the large 1314 Score (tweet/query) Label Utterance −0.964 −1.427 CHAT Halloween has already finished. −0.957 −1.610 CHAT    Let’s sleep. −1.233 −0.562 NONCHAT Pokemon Go install. −1.837 −0.682 NONCHAT Weekly weather forecast. Table 5: Examples of the language model scores. The first two columns represent the scores provided by the GRU language models trained on the tweets and Web search queries, respectively. The third and fourth columns represent the label and utterance. Feature Acc. P R F1 tweet GRU 91.53 87.62 85.49 86.53 query GRU 91.38 87.55 85.06 86.28 query binary 91.42 87.56 85.21 86.36 Table 6: Effect of the three features derived from the tweets and Web search queries. 
Feature Weight Feature Weight tweet GRU 1.128 query GRU −0.771 I 0.215 call to −0.217 Sing 0.195 volume −0.196 Table 7: Examples feature weights of SVM+embed+tweet-query. positive and negative weights, respectively. This indicates that effectiveness of the language models. We also see that the first person has a large positive weight, while terms related to device controlling (“call to” and “volume”) have large negative weights. Table 8 represents chat detection results of SVM+embd.+tweet-query across the numbers of votes that the majority label obtained. As expected, we see that all metrics get higher as the number of agreement among the crowd workers becomes larger. In fact, we see as much as 98% accuracy when all seven workers agree. This implies that utterances easy for humans to classify are also easy for the classifiers. 5.4 Training data size We next investigate the effect of the training data size on the classification accuracy. Figure 3 illustrates the learning curve. It represents that the classification accuracy improves almost monotonically as the training data size increases. Although our training data is by no means small, the shape of the learning curve nevertheless suggests that further improvement would be achieved by adding more training data. This im#Votes #Utter. Acc. P R F1 4 1701 66.67 55.41 59.81 57.53 5 2670 87.72 80.46 83.01 81.72 6 4978 96.02 92.73 93.87 93.30 7 5811 98.33 96.73 97.68 97.20 Table 8: Chat detection results across the numbers of votes that the majority label obtained. 12.5 25.0 37.5 50.0 62.5 75.0 87.5 100.0 Training data size (%) 88 89 90 91 92 93 Accuracy (%) SVM+embed. SVM+embed.+tweet+query Figure 3: Learning curve of the proposed methods. The horizontal axis represents what percentage of the training portion is used in each fold of the cross validation. The vertical axis represents the classification accuracy. plies that a very large amount of training data are required for covering open-domain utterances in intelligent assistants. The figure at the same time represents the usefulness of the external resources. We see that SVM+embed.+tweet-query trained on about 25% of the training data is able to achieve comparable accuracy with SVM+embed. trained on the entire training data. This result suggests that the external resources are able to compensate for the scarcity of annotated data. 5.5 Utterance length We finally investigate how the utterance length correlates with the classification accuracy. Fig1315 5 6 7 8 9 10 11 12 13 14 15 Utterance Length 86 88 90 92 94 Accuracy (%) SVM+embed. SVM+embed.+tweet-query Figure 4: Classification accuracy across utterance lengths in the number of characters. ure 4 illustrates the classification accuracies of SVM+embed. and SVM+embed.+tweet-query for each utterance length in the number of characters. Figure 4 reveals that the difference between the two proposed methods is evident in short utterances (i.e., ≤5). This is because those utterances are too short to contain sufficient information required for classification, and the additional features are helpful. We note that Japanese writing system uses ideograms and thus even five characters is enough to represent a simple sentence. We also see a clear difference in longer utterances (i.e., 15 ≤) as well. We consider those long utterances are difficult to classify because some words in the utterances are irrelevant for the classification and the n-gram and embedding features include those irrelevant ones. 
On the other hand, we consider that the language model scores are good at capturing stylistic information irrespective of the utterance length. 6 Future Work As discussed in Section 3.2, some user utterances such as “I am hungry” are ambiguous in nature and thus are difficult to handle in the current framework. An important future work is to develop a sophisticated dialogue manager to handle such utterances, for example, by making clarification questions (Schl¨oder and Fernandez, 2015). We manually investigated the dialogue acts in the chat detection dataset (c.f., Section 3.2). It is interesting to automatically determine the dialogue acts to help producing appropriate system responses. Some related studies exist in such a research direction (Meguro et al., 2010). Although we used only text data to perform chat detection, we can also utilize contextual information such as the previous utterances (Xu and Sarikaya, 2014), the acoustic information (Jiang et al., 2015), and the user profile (Sano et al., 2016). It is an interesting research topic to use such contextual information beyond text. It is considered promising to make use of a neural network for integrating such heterogeneous information. An automatic speech recognition (ASR) error is a popular problem in SDS, and previous studies have proposed sophisticated techniques, including re-ranking (Morbini et al., 2012) and POMDP (Williams and Young, 2007), for addressing the ASR errors. Incorporating these techniques into our methods is also an important future work. Although the studies on non-task-oriented SDS have made substantial progress in the past few years, it unfortunately remains difficult for the systems to fluently chat with users (Higashinaka et al., 2015). Further efforts on improving nontask-oriented dialogue systems is an important future work. 7 Conclusion This paper investigated chat detection for combining domain-specific task-oriented SDS and opendomain non-task-oriented SDS. To address the scarcity of benchmark datasets for this task, we constructed a new benchmark dataset from the real log data of a commercial intelligent assistant. In addition, we investigated using the external resources, tweets and Web search queries, to handle open-domain user utterances, which characterize the task of chat detection. The empirical experiment demonstrated that the off-the-shelf supervised methods augmented with the external resources perform accurately, outperforming the baseline approaches. We hope that this study contributes to remove the long-standing boundary between task-oriented and non-task-oriented SDS. To facilitate future research, we are going to release the dataset together with the feature values derived from the tweets and Web search queries.14 Acknowledgments We thank Manabu Sassano, Chikara Hashimoto, Naoki Yoshinaga, and Masashi Toyoda for fruitful discussions and comments. We also thank the anonymous reviewers. 14https://research-lab.yahoo.co.jp/en/ software 1316 References Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder– decoder for statistical machine translation. In Proceedings of EMNLP. pages 1724–1734. http://www.aclweb.org/anthology/D14-1179. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555. 
Paul Crook, Alex Marin, Vipul Agarwal, Khushboo Aggarwal, Tasos Anastasakos, Ravi Bikkula, Daniel Boies, Asli Celikyilmaz, Senthilkumar Chandramohan, Zhaleh Feizollahi, Roman Holenstein, Minwoo Jeong, Omar Khan, Young-Bum Kim, Elizabeth Krawczyk, Xiaohu Liu, Danko Panic, Vasiliy Radostev, Nikhil Ramesh, Jean-Phillipe Robichaud, Alexandre Rochette, Logan Stromberg, and Ruhi Sarikaya. 2016. Task completion platform: A selfserve multi-domain goal oriented dialogue platform. In Proceedings of NAACL (Demonstrations). pages 47–51. http://www.aclweb.org/anthology/N163010. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL. pages 1631–1640. http://www.aclweb.org/anthology/P16-1154. Daniel (Zhaohan) Guo, Gokhan Tur, Scott Wen tau Yih, and Geoffrey Zweig. 2014. Joint semantic utterance classification and slot filling with recursive neural networks. In Proceedings of IEEE SLT Workshop. Michael Gutmann and Aapo Hyv¨arinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of AISTATS. pages 297–304. Takayuki Hasegawa, Nobuhiro Kaji, Naoki Yoshinaga, and Masashi Toyoda. 2013. Predicting and eliciting addressee’s emotion in online dialogue. In Proceedings of ACL. pages 964–972. http://www.aclweb.org/anthology/P13-1095. Ryuichiro Higashinaka, Kotaro Funakoshi, Masahiro Araki, Hiroshi Tsukahara, Yuka Kobayashi, and Masahiro Mizukami. 2015. Towards taxonomy of errors in chat-oriented dialogue systems. In Proceedings of SIGDIAL. pages 87–95. http://aclweb.org/anthology/W15-4611. Ryuichiro Higashinaka, Noriaki Kawamae, Kugatsu Sadamitsu, Yasuhiro Minami, Toyomi Meguro, Kohji Dohsaka, and Hirohito Inagaki. 2011. Building a conversational model from two-tweets. In Proceedings of ASRU. pages 330–335. Jiepu Jiang, Ahmed Hassan Awadallah, Rosie Jones, Umut Ozertem, Imed Zitouni, Ranjitha Gurunath Kulkarni, and Omar Zia Khan. 2015. Automatic online evaluation of intelligent assistants. In Proceedings of WWW. pages 506–516. Rie Johnson and Tong Zhang. 2015a. Effective use of word order for text categorization with convolutional neural networks. In Proceedings of NAACL. pages 103–112. http://www.aclweb.org/anthology/N15-1011. Rie Johnson and Tong Zhang. 2015b. Semi-supervised convolutional neural networks for text categorization via region embedding. In Advances in NIPS, pages 919–927. Joo-Kyung Kim, Gokhan Tur, Asli Celikyilmaz, Bin Cao, and Ye-Yi Wang. 2016. Intent detection using semantically enriched word embeddings. In Proceedings of IEEE SLT Workshop. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP. pages 1746–1751. http://www.aclweb.org/anthology/D14-1181. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Julia Kiseleva, Kyle Williams, Ahmed Hassan Awadallah, Aidan Crook, Imed Zitouni, and Tasos Anastasakos. 2016a. Predicting user satisfaction with intelligent assistants. In Proceedings of SIGIR. pages 45–54. Julia Kiseleva, Kyle Williams, Ahmed Hassan Awadallah, Aidan C. Crook, Imed Zitouni, and Tasos Anastasakos. 2016b. Understanding user satisfaction with intelligent assistants. In Proceedings of SIGCHIIR. pages 121–130. Hayato Kobayashi, Kaori Tanio, and Manabu Sassano. 2015. Effects of game on user engagement with spoken dialogue system. In Proceedings of SIGDIAL. pages 422–426. http://aclweb.org/anthology/W154656. 
Cheongjae Lee, Sangkeun Jung, Seokhwan Kim, and Gary Geunbae Lee. 2007. Example-based dialog modeling for practical multi-domain dialog system. Speech Communication 51(5):466–484. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL. pages 110–119. http://www.aclweb.org/anthology/N16-1014. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of ACL. pages 994–1003. http://www.aclweb.org/anthology/P16-1094. 1317 Toyomi Meguro, Ryuichiro Higashinaka, Yasuhiro Minami, and Kohji Dohsaka. 2010. Controlling listening-oriented dialogue using partially observable markov decision processes. In Proceedings of Coling. pages 761–769. http://www.aclweb.org/anthology/C10-1086. Tomas Mikolov, Anoop Deoras, Daniel Povey, Lukas Burget, and Jan Cernocky. 2011. Strategies for training large scale neural network language models. In Proceedings of ASRU. pages 196–201. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in NIPS. pages 3111–3119. Fabrizio Morbini, Kartik Audhkhasi, Ron Artstein, Maarten Van Segbroeck, Kenji Sagae, Panayiotis Georgiou, David R. Traum, and Shri Narayanan. 2012. A reranking approach for recognition and classification of speech input in conversational dialogue systems. In Proceedings of SLT. pages 49–54. Andreea I. Niculescu and Rafael E. Banchs. 2015. Strategies to cope with errors in human-machine speech interactions: using chatbots as back-off mechanism for task-oriented dialogues. In Proceedings of ERRARE. Naoki Otani, Daisuke Kawahara, Sadao Kurohashi, Nobuhiro Kaji, and Manabu Sassano. 2016. Large-scale acquisition of commonsense knowledge via a quiz game on a dialogue system. In Proceedings of OKBQA. pages 11–20. http://aclweb.org/anthology/W16-4402. Suman Ravuri and Andreas Stolcke. 2015. A comparative study of neural network models for lexical intent classification. In In Proceedings of ASRU. pages 368–374. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. In In Proceedings of NAACL. pages 172–180. http://www.aclweb.org/anthology/N10-1020. Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of EMNLP. pages 583–593. http://www.aclweb.org/anthology/D11-1054. Shumpei Sano, Nobuhiro Kaji, and Manabu Sassano. 2016. Prediction of prospective user engagement with intelligent assistants. In Proceedings of ACL. pages 1203–1212. http://www.aclweb.org/anthology/P16-1114. Ruhi Sarikaya. 2017. The technology behind personal digital assistants: An overview of the system architecture and key components. IEEE Signal Processing Magazine 34(1):67–81. Julian J. Schl¨oder and Raquel Fernandez. 2015. Clarifying intentions in dialogue: A corpus study. In Proceedings of the 11th International Conference on Computational Semantics. pages 46–51. http://www.aclweb.org/anthology/W15-0106. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of ACL. pages 1577–1586. http://www.aclweb.org/anthology/P15-1152. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. 
A neural network approach to contextsensitive generation of conversational responses. In Proceedings of NAACL. pages 196–205. http://www.aclweb.org/anthology/N15-1020. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in NIPS, pages 3104–3112. Gokhan Tur, Dilek Hakkani-T¨ur, and Larry Heck. 2010. What is left to be understood in atis? In Proceedings of IEEE SLT Workshop. pages 19–24. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL. pages 384–394. http://www.aclweb.org/anthology/P10-1040. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of Deep Learning Workshop. Richard S. Wallace. 2009. The Anatomy of A.L.I.C.E., Springer, pages 181–210. Zhuoran Wang, Hongliang Chen, Guanchun Wang, Hao Tian, Hua Wu, and Haifeng Wang. 2014. Policy learning for domain selection in an extensible multi-domain spoken dialogue system. In Proceedings of EMNLP. pages 57–67. http://www.aclweb.org/anthology/D14-1007. Joseph Weizenbaum. 1966. Eliza–a computer program for the study of natural language communication between man and machine. Communications of the ACM 9(1):36–45. Jason D. Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech & Language 21(2):393–422. Puyang Xu and Ruhi Sarikaya. 2014. Contextual domain classification in spoken language understanding systems using recurrent neural network. In Proceedings of ICASSP. pages 136–140. Zhao Yan, Nan Duan, Junwei Bao, Peng Chen, Ming Zhou, Zhoujun Li, and Jianshe Zhou. 2016. Docchat: An information retrieval approach for chatbot engines using unstructured documents. In Proceedings of ACL. pages 516–525. http://www.aclweb.org/anthology/P16-1049. 1318 Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In Proceedings of IJCAI. pages 2993–2999. 1319
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1320–1330 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1121 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1320–1330 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1121 A Neural Local Coherence Model Dat Tien Nguyen∗ Informatics Institute University of Amsterdam [email protected] Shafiq Joty Qatar Computing Research Institute HBKU, Qatar Foundation [email protected] Abstract We propose a local coherence model based on a convolutional neural network that operates over the entity grid representation of a text. The model captures long range entity transitions along with entity-specific features without loosing generalization, thanks to the power of distributed representation. We present a pairwise ranking method to train the model in an end-to-end fashion on a task and learn task-specific high level features. Our evaluation on three different coherence assessment tasks demonstrates that our model achieves state of the art results outperforming existing models by a good margin. 1 Introduction and Motivation What distinguishes a coherent text from a random sequence of sentences is that it binds the sentences together to express a meaning as a whole — the interpretation of a sentence usually depends on the meaning of its neighbors. Coherence models that can distinguish a coherent from incoherent texts have a wide range of applications in text generation, summarization, and coherence scoring. Several formal theories of coherence have been proposed (Mann and Thompson, 1988a; Grosz et al., 1995; Asher and Lascarides, 2003), and their principles have inspired development of many existing coherence models (Barzilay and Lapata, 2008; Lin et al., 2011; Li and Hovy, 2014). Among these models, the entity grid (Barzilay and Lapata, 2008), which is based on Centering Theory (Grosz et al., 1995), is arguably the most popular, and has seen a number of improvements over the years. As shown in Figure 1, the entity grid model represents a text by a grid that captures how ∗Both authors contributed equally to this work. grammatical roles of different entities change from sentence to sentence. The grid is then converted into a feature vector containing probabilities of local entity transitions, which enables machine learning models to learn the degree of text coherence. Extensions of this basic grid model incorporate entity-specific features (Elsner and Charniak, 2011), multiple ranks (Feng and Hirst, 2012), and coherence relations (Feng et al., 2014). While the entity grid and its extensions have been successful in many applications, they are limited in several ways. First, they use discrete representation for grammatical roles and features, which prevents the model from considering sufficiently long transitions (Bengio et al., 2003). Second, feature vector computation in existing models is decoupled from the target task, which limits the model’s capacity to learn task-specific features. In this paper, we propose a neural architecture for coherence assessment that can capture long range entity transitions along with arbitrary entityspecific features. Our model obtains generalization through distributed representations of entity transitions and entity features. 
We also present an end-to-end training method to learn task-specific high level features automatically in our model. We evaluate our approach on three different evaluation tasks: discrimination, insertion, and summary coherence rating, proposed previously for evaluating coherence models (Barzilay and Lapata, 2008; Elsner and Charniak, 2011). Discrimination and insertion involve identifying the right order of the sentences in a text with different levels of difficulty. In the summary coherence rating task, we compare the rankings, given by the model, against human judgments of coherence. The experimental results show that our neural models consistently improve over the nonneural counterparts (i.e., existing entity grid models) yielding absolute gains of about 4% on dis1320 crimination, up to 2.5% on insertion, and more than 4% on summary coherence rating. Furthermore, our model achieves state of the art results in all these tasks. We have released our source code for research purposes.1 The remainder of this paper is organized as follows. We describe entity grid, its extensions, and its limitations in Section 2. In Section 3, we present our neural model. We describe evaluation tasks and results in Sections 4 and 5. We give a brief account of related work in Section 6. Finally, we conclude with future directions in Section 7. 2 Entity Grid and Its Extensions Motivated by Centering Theory (Grosz et al., 1995), Barzilay and Lapata (2008) proposed an entity-based model for representing and assessing text coherence. Their model represents a text by a two-dimensional array called entity grid that captures transitions of discourse entities across sentences. As shown in Figure 1, the rows of the grid correspond to sentences, and the columns correspond to discourse entities appearing in the text. They consider noun phrases (NP) as entities, and employ a coreference resolver to detect mentions of the same entity (e.g., Obama, the president). Each entry Gi,j in the entity grid represents the syntactic role that entity ej plays in sentence si, which can be one of: subject (S), object (O), or other (X). In addition, entities not appearing in a sentence are marked by a special symbol (-). If an entity appears more than once with different grammatical roles in the same sentence, the role with the highest rank (S ≻O ≻X) is considered. To represent the entity grid using a feature vector, Barzilay and Lapata (2008) compute probability for each local entity transition of length k (i.e., {S, O, X, −}k), and represent each grid by a vector of 4k transitions probabilities. To distinguish between transitions of important entities from unimportant ones, they consider the salience of the entities, which they quantify by their occurrence frequency in the document. Assessment of text coherence is then formulated as a ranking problem in an SVM preference ranking framework (Joachims, 2002). Subsequent studies proposed to extend the basic entity grid model. Filippova and Strube (2007) attempted to improve the model by grouping en1https://github.com/datienguyen/cnn_ coherence/ UNIT PRODUCTS RESEARCH COMPANY PARTS CONTROLS INDUSTRY ELECTRONICS TERM CONCERN AEROSPACE EMPLOYEES SERVICES LOS ANGELES EATON s0 O − X X − − − − − − − X − − X s1 − − − − − − − − S − − − − − − s2 − O − − − − X − − − − O O X − s3 − − − − X X − X − O X − − − S s0: Eaton Corp. said it sold its Pacific Sierra Research unit to a company formed by employees of that unit. s1: Terms were not disclosed. 
s2: Pacific Sierra, based in Los Angeles, has 200 employees and supplies professional services and advanced products to industry. s3: Eaton is an automotive parts, controls and aerospace electronics concern. Figure 1: Entity grid representation (top) for a document (below) from WSJ (id: 0079). tities based on semantic relatedness, but did not get significant improvement. Elsner and Charniak (2011) proposed a number of improvements. They initially show significant improvement by including non-head nouns (i.e., nouns that do not head NPs) as entities in the grid.2 Then, they extend the grid to distinguish between entities of different types by incorporating entity-specific features like named entity, noun class, modifiers, etc. These extensions led to the best results reported so far. The Entity grid and its extensions have been successfully applied to many downstream tasks including coherence rating (Barzilay and Lapata, 2008), essay scoring (Burstein et al., 2010), story generation (McIntyre and Lapata, 2010), and readability assessment (Pitler et al., 2010; Barzilay and Lapata, 2008). They have also been critical components in state-of-the-art sentence ordering models (Soricut and Marcu, 2006; Elsner and Charniak, 2011; Lin et al., 2011). 2.1 Limitations of Entity Grid Models Despite its success, existing entity grid models are limited in several ways. • Existing models use discrete representation for grammatical roles and features, which leads to the so-called curse of dimensionality problem (Bengio et al., 2003). In particular, to model transitions of length k with R different grammatical roles, the basic entity grid model needs to compute Rk tran2They match the nouns to detect coreferent entities. 1321 sition probabilities from a grid. One can imagine that the estimated distribution becomes sparse as k increases. This prevents the model from considering longer transitions – existing models use k ≤3. This problem is exacerbated when we want to include entity-specific features, as the number of parameters grows exponentially with the number of features (Elsner and Charniak, 2011). • Existing models compute feature representations from entity grids in a task-agnostic way. In other words, feature extraction is decoupled from the target downstream tasks. This can limit the model’s capacity to learn task-specific features. Therefore, models that can be trained in an end-toend fashion on different target tasks are desirable. In the following section, we present a neural architecture that allows us to capture long range entity transitions along with arbitrary entity-specific features without loosing generalization. We also present an end-to-end training method to learn task-specific features automatically. 3 The Neural Coherence Model Figure 2 summarizes our neural architecture for modeling local coherence, and how it can be trained in a pairwise fashion. The architecture takes a document as input, and first extracts its entity grid.3 The first layer of the neural network transforms each grammatical role in the grid into a distributed representation, a real-valued vector. The second layer computes high-level features by going over each column (transitions) of the grid. The following layer selects the most important high-level features, which are in turn used for coherence scoring. The features computed at different layers of the network are automatically trained by backpropagation to be relevant to the task. In the following, we elaborate on the layers of the neural network model. 
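Before doing so, it may help to make the traditional grid features of Section 2 concrete. The short sketch below is our own illustration (not the authors' code); the toy grid and the choice k = 2 are assumptions made only for the example. It counts local role transitions column by column and normalizes them into the 4^k-dimensional probability vector of the basic entity grid model, which makes the sparsity problem of Section 2.1 tangible: already with k = 3 the vector has 64 entries estimated from a single document.

from itertools import product
from collections import Counter

ROLES = ["S", "O", "X", "-"]

def transition_probs(grid, k=2):
    """grid: one list of roles per entity (its column), one role per
    sentence drawn from ROLES. Returns the relative frequency of every
    length-k role sequence over all column windows, i.e. the 4**k
    feature vector of the basic entity grid model."""
    counts, total = Counter(), 0
    for column in grid:
        for i in range(len(column) - k + 1):
            counts[tuple(column[i:i + k])] += 1
            total += 1
    return {t: counts[t] / total for t in product(ROLES, repeat=k)}

# Toy grid loosely inspired by Figure 1 (roles invented for illustration):
grid = [["O", "-", "-", "-"],    # UNIT
        ["X", "S", "-", "-"],    # COMPANY
        ["S", "-", "-", "S"]]    # EATON
features = transition_probs(grid, k=2)
print(features[("S", "-")], features[("-", "-")])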
(I) Transforming grammatical roles into feature vectors: Grammatical roles are fed to our model as indices taken from a finite vocabulary V. In the simplest scenario, V contains {S, O, X, −}. However, we will see in Section 3.1 that as we include more entity-specific features, V can contain more symbols. The first layer of our network maps each of these indices into a distributed representation Rd by looking up a shared embedding matrix 3For clarification, pairwise input as shown in the figure is required only to train the model. E ∈R|V|×d. We consider E a model parameter to be learned by backpropagation on a given task. We can initialize E randomly or using pretrained vectors trained on a general coherence task. Given an entity grid G with columns representing entity transitions over sentences in a document, the lookup layer extracts a d-dimensional vector for each entry Gi,j from E. More formally, L(G) = D E(G1,1) · · · E(Gi,j) · · · E(Gm,n) E (1) where E(Gi,j) refers to the row in E that corresponds to the grammatical role Gi,j ∈V; m is the total number of sentences and n is the total number of entities in the document. The output L(G) is a tensor in Rm×n×d, which is fed to the next layer of the network as we describe below. (II) Modeling entity transitions: The vectors produced by the lookup layer are combined by subsequent layers of the network to generate a coherence score for the document. To compose higher-level features from the embedding vectors, we make the following modeling assumptions: • Similar to existing entity grid models, we assume there is no spatio-temporal relation between the entities in a document. In other words, columns in a grid are treated independently. • We are interested in modeling entity transitions of arbitrary lengths in a location-invariant way. This means, we aim to compose local patches of entity transitions into higher-level representations, while treating the patches independently of their position in the entity grid. Under these assumptions, the natural choice to tackle this problem is to use a convolutional approach, used previously to solve other NLP tasks (Collobert et al., 2011; Kim, 2014). Convolution layer: A convolution operation involves applying a filter w ∈Rk.d (i.e., a vector of weight parameters) to each entity transition of length k to produce a new abstract feature ht = f(wT Lt:t+k−1,j + bt) (2) where Lt:t+k−1,j denotes the concatenation of k vectors in the lookup layer representing a transition of length k for entity ej in the grid, bt is a bias 1322 Figure 2: Neural architecture for modeling local coherence and the pairwise training method. term, and f is a nonlinear activation function, e.g., ReLU (Nair and Hinton, 2010) in our model. We apply this filter to each possible k-length transitions of different entities in the grid to generate a feature map, hi = [h1, · · · , hm.n+k−1]. We repeat this process N times with N different filters to get N different feature maps (Figure 2). Notice that we use a wide convolution (Kalchbrenner et al., 2014), as opposed to narrow, to ensure that the filters reach entire columns of a grid, including the boundary entities. This is done by performing zero-padding, where out-of-range (i.e., for t < 0 or t > {m, n}) vectors are assumed to be zero. Convolutional filters learn to compose local transition features of a grid into higher-level representations automatically. 
Since it operates over the distributed representation of grid entries, compared to traditional grid models, the transition length k can be sufficiently large (e.g., 5 −8 in our experiments) to capture long-range transitional dependencies without overfitting on the training data. Moreover, unlike existing grid models that compute transition probabilities from a single document, embedding vectors and convolutional filters are learned from all training documents, which helps the neural framework to obtain better generalization and robustness. Pooling layer: After the convolution, we apply a max-pooling operation to each feature map. m = [µp(h1), · · · , µp(hN)] (3) where µp(hi) refers to the max operation applied to each non-overlapping4 window of p features in the feature map hi. Max-pooling reduces the output dimensionality by a factor of p, and it drives the model to capture the most salient local features from each feature map in the convolutional layer. Coherence scoring: Finally, the max-pooled features are used in the output layer of the network to produce a coherence score y ∈R. y = vT m + b (4) where v is the weight vector and b is a bias term. Why it works: Intuitively, each filter detects a specific transition pattern (e.g., ‘SS-O-X’ for a coherent text), and if this pattern occurs somewhere in the grid, the resulting feature map will have a large value for that particular region and small values for other regions. By applying max pooling on this feature map, the network then discovers that the transition appeared in the grid. 3.1 Incorporating Entity-Specific Features Our model as described above neuralizes the basic entity grid model that considers only entity transitions without distinguishing between types of the entities. However, as Elsner and Charniak (2011) pointed out entity-specific features could be crucial for modeling local coherence. One simple way to incorporate entity-specific features into our model is to attach the feature value (e.g., named entity type) with the grammatical role in the grid. 4We set the stride size to be the same as the pooling length p to get non-overlapping regions. 1323 For example, if an entity ej of type PERSON appears as a subject (S) in sentence si, the grid entry Gi,j can be encoded as PERSON-S. 3.2 Training Our neural model assigns a coherence score to an input document d based on the degree of local coherence observed in its entity grid G. Let y = φ(G|θ) define our model that transforms an input grid G to a coherence score y through a sequence of lookup, convolutional, pooling, and linear projection layers with parameter set θ. The parameter set θ includes the embedding matrix E, the filter matrix W, the weight vector v, and the biases. We use a pairwise ranking approach (Collobert et al., 2011) to learn θ. The training set comprises ordered pairs (di, dj), where document di exhibits a higher degree of coherence than document dj. As we will see in Section 4 such orderings can be obtained automatically or through manual annotation. In training, we seek to find θ that assigns a higher coherence score to di than to dj. We minimize the following ranking objective with respect to θ: J (θ) = max{0, 1 −φ(Gi|θ) + φ(Gj|θ)} (5) where Gi and Gj are the entity grids corresponding to documents di and dj, respectively. Notice that (also shown in Figure 2) the network shares its layers (and hence θ) to obtain φ(Gi|θ) and φ(Gj|θ) from a pair of input grids (Gi, Gj). 
Barzilay and Lapata (2008) adopted a similar ranking criterion using an SVM preference kernel learner as they argue coherence assessment is best seen as a ranking problem as opposed to classification (coherent vs. incoherent). Also, the ranker gives a scoring function φ that a text generation system can use to compare alternative hypotheses. 4 Evaluation Tasks We evaluate the effectiveness of our coherence models on two different evaluation tasks: sentence ordering and summary coherence rating. 4.1 Sentence Ordering Following Elsner and Charniak (2011), we evaluate our models on two sentence ordering tasks: discrimination and insertion. In the discrimination task (Barzilay and Lapata, 2008), a document is compared to a random perSections # Doc. # Pairs Avg. # Sen. TRAIN 00-13 1,378 26,422 21.5 TEST 14-24 1,053 20,411 22.3 Table 1: Statistics on the WSJ dataset. mutation of its sentences, and the model is considered correct if it scores the original document higher than the permuted one. We use 20 permutations of each document in the test set in accordance with previous work. In the insertion task (Elsner and Charniak, 2011), we evaluate models based on their ability to locate the original position of a sentence previously removed from a document. To measure this, each sentence in the document is removed in turn, and an insertion place is located for which the model gives the highest coherence score to the document. The insertion score is then computed as the average fraction of sentences per document reinserted in their actual position. Discrimination can be easier for longer documents, since a random permutation is likely to be different than the original one. Insertion is a much more difficult task since the candidate documents differ only by the position of one sentence. Dataset: For sentence ordering tasks, we use the Wall Street Journal (WSJ) portion of Penn Treebank, as used by Elsner and Charniak (2008, 2011); Lin et al. (2011); Feng et al. (2014). Table 1 gives basic statistics about the dataset. Following previous works, we use 20 random permutations of each article, and we exclude permutations that match the original document.5 The fourth column (# Pairs) in Table 1 shows the resulting number of (original, permuted) pairs used for training our model and for testing in the discrimination task. Some previous studies (Barzilay and Lapata, 2008; Li and Hovy, 2014) used the AIRPLANES and the EARTHQUAKES corpora, which contain reports on airplane crashes and earthquakes, respectively. Each of these corpora contains 100 articles for training and 100 articles for testing. The average number of sentences per article in these two corpora is 10.4 and 11.5, respectively. We preferred the WSJ corpus for several reasons. First and most importantly, the WSJ corpus is larger than other corpora (see Table 1). A large training set is crucial for learning effective 5Short articles may produce many matches. 1324 deep learning models (Collobert et al., 2011), and a large enough test set is necessary to make a general comment about model performance. Second, as Elsner and Charniak (2011) pointed out, texts in AIRPLANES and EARTHQUAKES are constrained in style, whereas WSJ documents are more like normal informative articles. 
Third, we could reproduce results on this dataset for the competing systems (e.g., entity grid and its extensions) using the publicly available Brown coherence toolkit.6 4.2 Summary Coherence Rating We further evaluate our models on the summary coherence rating task proposed by Barzilay and Lapata (2008), where we compare rankings given by a model to a pair of summaries against rankings elicited from human judges. Dataset: The summary dataset was extracted from the Document Understanding Conference (DUC’03), which contains 6 clusters of multidocument summaries produced by human experts and 5 automatic summarization systems. Each cluster has 16 summaries of a document with pairwise coherence rankings given by humans judges; see (Barzilay and Lapata, 2008) for details on the annotation method. There are 144 pairs of summaries for training and 80 pairs for testing. 5 Experiments In this section, we present our experiments — the models we compare, their settings, and the results. 5.1 Models Compared We compare our coherence model against a random baseline and several existing models. Random: The Random baseline makes a random decision for the evaluation tasks. Graph-based Model: This is the graph-based unsupervised model proposed by Guinaudeau and Strube (2013). We use the implementation from the cohere7 toolkit (Smith et al., 2016), and run it on the test set with syntactic projection (command line option ‘projection=3’) for graph construction. This setting yielded best scores for this model. Distributed Sentence Model: Li and Hovy (2014) proposed this neural model for measuring 6https://bitbucket.org/melsner/browncoherence 7https://github.com/karins/CoherenceFramework text coherence. The model first encodes each sentence in a document into a fixed-length vector using a recurrent or a recursive neural network. Then it computes the coherence score of the document by aggregating the scores estimated for each window of three sentences in the document. We used the implementation made publicly available by the authors.8 We trained the model on our WSJ corpus with 512, 1024 and 1536 minibatch sizes for a maximum of 25 epochs.9 The model that used minibatch size of 512 and completed 23 epochs achieved the best accuracy on the DEV set. We applied this model to get the scores on the TEST set. Grid-all nouns (E&C): This is the simple extension of the original entity grid model, where all nouns are considered as entities. Elsner and Charniak (2011) report significant gains by considering all nouns as opposed to only head-nouns. Results for this model were obtained by training the baseline entity grid model (command line option ‘-n’) in the Brown coherence toolkit on our dataset. Extended grid (E&C): This represents the extended entity grid model of Elsner and Charniak (2011) that uses 9 entity-specific features; 4 of them were computed from external corpora. This model considers all nouns as entities. For this system, we train the extended grid model (command line option ‘-f’) in the Brown coherence toolkit. Grid-CNN: This is our proposed neural extension of the basic entity grid (all nouns), where we only consider entity transitions as input. Extended Grid-CNN: This corresponds to our neural model that incorporates entity-specific features following the method described in Section 3.1. To keep the model simple, we include only three entity-specific features from (Elsner and Charniak, 2011) that are easy to compute and do not require any external corpus. 
The features are: (i) named entity type, (ii) salience as determined by occurrence frequency of the entity, and 8http://cs.stanford.edu/ bdlijiwei/code/ 9Our WSJ corpus is about 14 times larger than their ACCIDENT or EARTHQUAKE corpus (1378 vs. 100 training articles), and the articles in our corpus are generally longer than the articles in their corpus (on average 22 vs. 10 sentences per article). Also, the vocabulary in our corpus is much larger than their vocabulary (45462 vs. 4758). Considering these factors and the fact that their Java-based implementation does not support GPU and parallelization, it takes quite long to train and to validate their model on our dataset. In our experiments, depending on the minibatch size, it took approximately 3-5 days to complete only one epoch of training! 1325 Batch Emb. Dropout Filter Win. Pool Grid-CNN 128 100 0.5 150 6 6 Ext. Grid-CNN 32 100 0.5 150 5 6 Table 2: Optimal hyper-parameter setting for our neural models based on development set accuracy. (iii) whether the entity has a proper mention. 5.2 Settings for Neural Models We held out 10% of the training documents to form a development set (DEV) on which we tune the hyper-parameters of our neural models. For discrimination and insertion tasks, the resulting DEV set contains 138 articles and 2,678 pairs after removing the permutations that match the original documents. For the summary rating task, DEV contains 14 pairs of summaries. We implement our models in Theano (Theano Development Team, 2016). We use rectified linear units (ReLU) as activations (f). The embedding matrix is initialized with samples from uniform distribution U(−0.01, 0.01), and the weight matrices are initialized with samples from glorotuniform distribution (Glorot and Bengio, 2010). We train the models by optimizing the pairwise ranking loss in Equation 5 using the gradientbased online learning algorithm RMSprop with parameters (ρ and ϵ) set to the values suggested by Tieleman and Hinton (2012).10 We use up to 25 epochs. To avoid overfitting, we use dropout (Srivastava et al., 2014) of hidden units, and do early stopping by observing accuracy on the DEV set – if the accuracy does not increase for 10 consecutive epochs, we exit with the best model recorded so far. We search for optimal minibatch size in {16, 32, 64, 128}, embedding size in {80, 100, 200}, dropout rate in {0.2, 0.3, 0.5}, filter number in {100, 150, 200, 300}, window size in {2, 3, 4, 5, 6, 7, 8}, and pooling length in {3, 4, 5, 6, 7}. Table 2 shows the optimal hyperparameter setting for our models. The best model on DEV is then used for the final evaluation on the TEST set. We run each experiment five times, each time with a different random seed, and we report the average of the runs to avoid any randomness in results. Statistical significance tests are done using an approximate randomization test based on the accuracy. We used SIGF V.2 (Pad´o, 2006) with 10Other adaptive algorithms, e.g., ADAM (Kingma and Ba, 2014), ADADELTA (Zeiler, 2012) gave similar results. Discr. Ins. Acc F1 Random 50.0 50.0 12.60 Graph-based (G&S) 64.23 65.01 11.93 Dist. sentence (L&H) 77.54 77.54 19.32 Grid-all nouns (E&C) 81.58 81.60 22.13 Extended Grid (E&C) 84.95 84.95 23.28 Grid-CNN 85.57† 85.57† 23.12 Extended Grid-CNN 88.69† 88.69† 25.95† Table 3: Coherence evaluation results on Discrimination and Insertion tasks. † indicates a neural model is significantly superior to its nonneural counterpart with p-value < 0.01. 10,000 iterations. 
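Before turning to the results, the sketch below pulls Equations (1) through (5) together into a single trainable scorer. This is our paraphrase, not the released implementation: the paper's code is in Theano, whereas this sketch assumes PyTorch, collapses the windowed max-pooling of Equation (3) into a global max, and only approximates the Table 2 hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GridCNNScorer(nn.Module):
    """Coherence scorer over an entity grid (Eqs. 1-4): embed grid
    entries, convolve over each entity column, max-pool, and project
    the pooled features to a scalar coherence score."""
    def __init__(self, vocab_size, emb_dim=100, n_filters=150, win=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # padding = win - 1 gives the wide convolution used in the paper
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=win,
                              padding=win - 1)
        self.out = nn.Linear(n_filters, 1)

    def forward(self, grid):
        # grid: LongTensor (n_entities, n_sentences) of role/feature ids
        x = self.emb(grid).transpose(1, 2)        # (entities, emb, sentences)
        h = F.relu(self.conv(x))                  # (entities, filters, positions)
        m = h.max(dim=2).values.max(dim=0).values # one value per filter
        return self.out(m)                        # scalar coherence score

def ranking_loss(model, grid_orig, grid_perm):
    # Pairwise ranking objective of Eq. (5): the original document's grid
    # should outscore the permuted one by a margin of 1.
    return F.relu(1 - model(grid_orig) + model(grid_perm)).mean()

The training loop then minimizes ranking_loss over (original, permutation) pairs with RMSprop, with dropout of hidden units and early stopping on the development set, as described in Section 5.2.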
5.3 Results on Sentence Ordering Table 3 shows the results on discrimination and insertion tasks. The graph-based model gets the lowest scores. This is not surprising considering that this model works in an unsupervised way. The distributed sentence model surprisingly performed poorly on our dataset. Among the existing models, the grid models get the best scores on both tasks. This demonstrates that entity transition, as a method to capture local coherence, is more effective than the sentence representation method. Neuralization of the existing grid models yields significant improvements in most cases. The GridCNN model delivers absolute improvements of about 4% in discrimination and 1% in insertion over the basic grid model. When we compare our Extended Grid-CNN with its non-neural counterpart Extended Grid, we observe similar gains in discrimination and more gains (2.5%) in insertion. Note that the Extended Grid-CNN yields these improvements considering only a subset of the Extended Grid features. This demonstrates the effectiveness of distributed representation and convolutional feature learning method. Compared to the discrimination task, gain in the insertion task is less verbose. There could be two reasons for this. First, as mentioned before, insertion is a harder task than discrimination. Second, our models were not trained specifically on the insertion task. The model that is trained to distinguish an original document from its random permutation may learn features that are not specific enough to distinguish documents when only one sentence differs. In the future, it will be interesting 1326 Acc F1 Random 50.0 50.0 Graph-based (G&S) 80.0 81.5 Grid (B&L) 83.8 Grid-CNN 85.0 85.0 Extended Grid-CNN 86.3 86.3 Pre-trained Grid-CNN 86.3 86.3 Pre-trained Ext. Grid-CNN 87.5 87.5 Table 4: Evaluation results on the Summary Coherence Rating task. to see how the model performs when it is trained on the insertion task directly. 5.4 Results on Summary Coherence Rating Table 4 presents the results on the summary coherence rating task, where we compare our models with the reported results of the graph-based method (Guinaudeau and Strube, 2013) and the initial entity grid model (Barzilay and Lapata, 2008) on the same experimental setting.11 The extended grid model does not use pairwise training, therefore could not be trained on the summarization dataset. Since there are not many training instances, our neural models may not learn well for this task. Therefore, we also present versions of our model, where we use pre-trained models from discrimination task on WSJ corpus (last two rows in the table ). The pre-trained models are then finetuned on the summary rating task. We can observe that even without pre-training our models outperform existing models, and pretraining gives further improvements. Specifically, Pre-trained Grid-CNN gives an improvement of 2.5% over the Grid model, and including entity features pushes the improvement further to 3.7%. 6 Related Work Barzilay and Lapata (2005, 2008) introduced the entity grid representation of discourse to model local coherence that captures the distribution of discourse entities across sentences in a text. They also introduced three tasks to evaluate the performance of coherence models: discrimination, summary coherence rating, and readability. 11Since we do not have access to the output of their systems, we could not do a significance test for this task. A number of extensions of the basic entity grid model has been proposed. 
Elsner and Charniak (2011) included entity-specific features to distinguish between entities. Feng and Hirst (2012) used the basic grid representation, but improved its learning to rank scheme. Their model learns not only from original document and its permutations but also from ranking preferences among the permutations themselves. Guinaudeau and Strube (2013) convert a standard entity grid into a bipartite graph representing entity occurrences in sentences. To model local entity transition, the method constructs a directed projection graph representing the connection between adjacent sentences. Two sentences have a connected edge if they share at least one entity in common. The coherence score of the document is then computed as the average out-degree of sentence nodes. In addition, there are some approaches that model text coherence based on coreferences and discourse relations. Elsner and Charniak (2008) proposed the discourse-new model by taking into account mentions of all referring expression (i.e., NPs) whether they are first mention (discoursenew) or subsequent (discourse-old) mentions. Given a document, they run a maximum-entropy classifier to detect each NP as a label Lnp ∈ {new, old}. The coherence score of the document is then estimated by Q np:NPs P(Lnp|np). In this work, they also estimate text coherence through pronoun coreference modeling. Lin et al. (2011) assume that a coherent text has certain discourse relation patterns. Instead of modeling entity transitions, they model discourse role transitions between sentences. In a follow up work, Feng et al. (2014) trained the same model but using features derived from deep discourse structures annotated with Rhetorical Structure Theory or RST (Mann and Thompson, 1988b) relations. Louis and Nenkova (2012) introduced a coherence model based on syntactic patterns in text by assuming that sentences in a coherent discourse should share the same structural syntactic patterns. In recent years, there has been a growing interest in neuralizing traditional NLP approaches – language modeling (Bengio et al., 2003), sequence tagging (Collobert et al., 2011), syntactic parsing (Socher et al., 2013), and discourse parsing (Li et al., 2014), etc. Following this tradition, in this paper we propose to neuralize the popular entity grid models. Li and Hovy (2014) also proposed a 1327 neural framework to compute the coherence score of a document by estimating coherence probability for every window of L sentences (in their experiments, L = 3). First, they use a recurrent or a recursive neural network to compute the representation for each sentence in L from its words and their pre-trained embeddings. Then the concatenated vector is passed through a non-linear hidden layer, and finally the output layer decides if the window of sentences is a coherent text or not. Our approach is fundamentally different from their approach; our model operates over entity grids, and we use convolutional architecture to model sufficiently long entity transitions. 7 Conclusion and Future Work We presented a local coherence model based on a convolutional neural network that operates over the distributed representation of entity transitions in the grid representation of a text. Our architecture can model sufficiently long entity transitions, and can incorporate entity-specific features without loosing generalization power. We described a pairwise ranking approach to train the model on a target task and learn task-specific features. 
Our evaluation on discrimination, insertion and summary coherence rating tasks demonstrates the effectiveness of our approach yielding the best results reported so far on these tasks. In future, we would like to include other sources of information in our model. Our initial plan is to include rhetorical relations, which has been shown to benefit existing grid models (Feng et al., 2014). We would also like to extend our model to other forms of discourse, especially, asynchronous conversations, where participants communicate with each other at different times (e.g., forum, email). Acknowledgments We thank Regina Barzilay and Mirella Lapata for making their summarization data available and Micha Elsner for making his coherence toolkit publicly available. We also thank the three anonymous ACL reviewers and the program chairs for their insightful comments on the paper. References N. Asher and A. Lascarides. 2003. Logics of Conversation, Cambridge University Press. Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, Ann Arbor, Michigan, ACL ’05, pages 141–148. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics 34(1):1–34. http://www.aclweb.org/anthology/J08-1001. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res. 3. http://dl.acm.org/citation.cfm?id=944919.944966. Jill Burstein, Joel Tetreault, and Slava Andreyev. 2010. Using entity-based features to model coherence in student essays. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Los Angeles, California, HLT ’10, pages 681–684. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research 12:2493–2537. Micha Elsner and Eugene Charniak. 2008. Coreference-inspired coherence modeling. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers. Association for Computational Linguistics, Columbus, Ohio, HLT-Short ’08, pages 41–44. Micha Elsner and Eugene Charniak. 2011. Extending the entity grid with entity-specific features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2. Association for Computational Linguistics, Portland, Oregon, HLT ’11, pages 125–129. Vanessa Wei Feng and Graeme Hirst. 2012. Extending the entity-based coherence model with multiple ranks. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Avignon, France, EACL ’12, pages 315–324. Vanessa Wei Feng, Ziheng Lin, and Graeme Hirst. 2014. The impact of deep hierarchical discourse structures in the evaluation of text coherence. In COLING. Katja Filippova and Michael Strube. 2007. Extending the entity-grid coherence model to semantically related entities. In Proceedings of the Eleventh European Workshop on Natural Language Generation. Association for Computational Linguistics, Germany, ENLG ’07, pages 139–142. 
1328 Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010). Sardinia, Italy, volume 9, pages 249–256. Barbara J. Grosz, Scott Weinstein, and Aravind K. Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Comput. Linguist. 21(2):203–225. Camille Guinaudeau and Michael Strube. 2013. Graph-based local coherence modeling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers. pages 93–103. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, Edmonton, Alberta, Canada, KDD ’02, pages 133–142. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 655–665. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1746– 1751. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Jiwei Li and Eduard Hovy. 2014. A model of coherence based on distributed sentence representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 2039–2048. http://www.aclweb.org/anthology/D14-1218. Jiwei Li, Rumeng Li, and Eduard H Hovy. 2014. Recursive deep models for discourse parsing. In EMNLP. pages 2061–2069. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1. Association for Computational Linguistics, Portland, Oregon, HLT ’11, pages 997–1006. Annie Louis and Ani Nenkova. 2012. A coherence model based on syntactic patterns. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP-CoNLL ’12, pages 1157–1168. http://dl.acm.org/citation.cfm?id=2390948.2391078. W. Mann and S. Thompson. 1988a. Rhetorical Structure Theory: Toward a Functional Theory of Text Organization. Text 8(3):243–281. William C Mann and Sandra A Thompson. 1988b. Rhetorical structure theory: Toward a functional theory of text organization. Text 8(3):243–281. Neil McIntyre and Mirella Lapata. 2010. Plot induction and evolutionary search for story generation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, ACL ’10, pages 1562–1572. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. 
In Johannes Frnkranz and Thorsten Joachims, editors, Proceedings of the 27th International Conference on Machine Learning (ICML-10). Omnipress, pages 807–814. http://www.icml2010.org/papers/432.pdf. Sebastian Pad´o. 2006. User’s guide to sigf: Significance testing by approximate randomisation. Emily Pitler, Annie Louis, and Ani Nenkova. 2010. Automatic evaluation of linguistic quality in multidocument summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, ACL ’10, pages 544– 554. Karin Sim Smith, Wilker Aziz, and Lucia Specia. 2016. Cohere: A toolkit for local coherence. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA), Portoroz, Slovenia. Richard Socher, John Bauer, Christopher D. Manning, and Ng Andrew Y. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 455–465. http://www.aclweb.org/anthology/P13-1045. Radu Soricut and Daniel Marcu. 2006. Discourse generation using utility-trained coherence models. In Proceedings of the COLING/ACL on Main Conference Poster Sessions. Association for Computational Linguistics, Sydney, Australia, COLING-ACL ’06, pages 803–810. 1329 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929–1958. Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688. http://arxiv.org/abs/1605.02688. T. Tieleman and G Hinton. 2012. RMSprop, COURSERA: Neural Networks Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701. http://arxiv.org/abs/1212.5701. 1330
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1331–1341 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1122 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1331–1341 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1122 Data-Driven Broad-Coverage Grammars for Opinionated Natural Language Generation (ONLG) Tomer Cagan School of Computer Science The Interdisciplinary Center Herzeliya, Israel [email protected] Stefan L. Frank Centre for Language Studies Radboud University Nijmegen, The Netherlands [email protected] Reut Tsarfaty Mathematics and Computer Science The Open University of Israel Ra’anana, Israel [email protected] Abstract Opinionated natural language generation (ONLG) is a new, challenging, NLG task in which we aim to automatically generate human-like, subjective, responses to opinionated articles online. We present a data-driven architecture for ONLG that generates subjective responses triggered by users’ agendas, based on automatically acquired wide-coverage generative grammars. We compare three types of grammatical representations that we design for ONLG. The grammars interleave different layers of linguistic information, and are induced from a new, enriched dataset we developed. Our evaluation shows that generation with Relational-Realizational (Tsarfaty and Sima’an, 2008) inspired grammar gets better language model scores than lexicalized grammars `a la Collins (2003), and that the latter gets better humanevaluation scores. We also show that conditioning the generation on topic models makes generated responses more relevant to the document content. 1 Introduction Interaction in social media has become increasingly prevalent nowadays. It fundamentally changes the way businesses and consumers behave (Qualman, 2012), it is instrumental to the success of individuals and businesses (Haenlein and Kaplan, 2009) and it also affects political regimes (Howard et al., 2011; Lamer, 2012). In particular, automatic interaction in natural language in social media is now a common theme, as seen in the rapid popularization of chat applications, chat-bots, and “smart agents” aiming to conduct human-like interactions in natural language. So far, generation of human-like interaction in general has been addressed mostly commercially, where there is a movement towards online response automation (Owyang, 2012; Mah, 2012), and movement away from script-based interaction towards interactive chat bots (Mori et al., 2003; Feng et al., 2006). These efforts provide an automated one-size-fits-all type of interaction, with no particular expression of particular sentiments, topics, or opinions. In academia, work on generating human-like interaction focused so far on generating responses to tweets (Ritter et al., 2011; Hasegawa et al., 2013) or taking turns in short dialogs (Li et al., 2017). However, the architectures assumed in these studies implement sequence to sequence (seq2seq) mappings, which do not take into account topics, sentiments or agendas of the intended responders. Many real-world tasks and applications would benefit from automatic interaction that is generated intendedly based on a certain user profile or agenda. 
For instance, this can help promote a political candidate or a social idea in social media, aid people in forming and expressing opinions on specific topics, or, in human-computer interfaces (HCI), make the computer-side generated utterances more meaningful and ultimately more human-like (assuming that human-like interaction is very often affected by opinion, agenda, style, etc.). In this work we address the opinionated natural language generation (ONLG) task, in which we aim to automatically generate human-like responses to opinionated articles. These responses address particular topics and reflect diverse sentiments towards them, in accordance with predefined user agendas. This is an open-ended and unstructured generation challenge, which is closely tied to the communicative goals of actual human responders.

In previous work we addressed the ONLG challenge using a template-based approach (Cagan et al., 2014). The proposed system generated subjective responses to articles, driven by user agendas. While the evaluation showed promising results in human-likeness and relevance ratings, the template-based system suffers from low output variety, which leads to a learning effect that reduces the perceived human-likeness of generated responses over time.

In this work we tackle ONLG from a data-driven perspective, aiming to circumvent such learning effects and repetitive patterns in template-based generation. Here, we approach generation via automatically inducing broad-coverage generative grammars from a large corpus, and using them for response generation. More specifically, we define a grammar-based generation architecture and design different grammatical representations suitable for the ONLG task. Our grammars interleave different layers of linguistic information, including phrase-structure and dependency labels, lexical items, and levels of sentiment, with the goal of making responses both human-like and relevant. In classical NLG terms, these grammars offer the opportunity for both micro-planning and surface realization (Reiter and Dale, 1997) to unfold together. We implement a generator and a search strategy to carry out the generation and sort through possible candidates to get the best ones.

We evaluate the generated responses and the underlying grammars using automated metrics as well as human evaluation inspired by the Turing test (cf. Cagan et al. (2014) and Li et al. (2017)). Our evaluation shows that while relational-realizational (RR) inspired grammars (Tsarfaty and Sima’an, 2008) get good language model scores, simple head-driven lexicalized grammars à la Collins (2003) get better human ratings and are more sensitive to sentiment. Furthermore, we show that incorporating topic models into the grammar-based generation makes the generated responses more relevant to the document content. Finally, our human evaluation results show no learning effect; that is, human raters are unable to discover in the generated responses typical structures that would lead them to consider the responses machine-generated.

The remainder of this paper is organized as follows. In Section 2 we discuss the formal model, and in Section 3 we present the proposed end-to-end ONLG architecture. In Section 4 we introduce the grammars we define, and we describe how we use them for generation in Section 5. We follow that with our empirical evaluation in Section 6. In Section 7 we discuss related and future work, and in Section 8 we summarize and conclude.

2 The Formal Model

Task Definition.
Let d be a document containing a single article, and let a be a user agenda as in Cagan et al. (2014). Specifically, a user agenda a can consist of one or more pairs of a topic (represented by a weighted bag-of-words) and an associated sentiment. Let c be an analysis function on documents such that c(d) yields a set of content elements which are also pairings of topics and sentiments. The operation ⊗represents the intersection of the sets of content elements in the document and in the user agenda. We cast ONLG as a prediction function which maps the intersection a ⊗c(d) to a sentence y ∈Σ∗in natural language (in our case, Σ is the vocabulary of English): fresponse(a ⊗c(d)) = y (1) For any non-empty intersection, a response is generated which is related to the topic of the intersection and the sentiments defined towards this topic. The relation between the sentiment in the user agenda and the sentiment reflected in the document is a simple xor function: when the user and the author share a sentiment toward a topic the response is positive, else it is negative. Objective Function. Let G be a formal generative grammar and let T be the set of trees strongly generated by G. In our proposed data-driven, grammar-based, generation architecture, we define fresponse as a function selecting a most probable tree t ∈T derived by G, given the intersection of document content and user agenda. fresponse(a ⊗c(d)) = argmax {w|w=yield(t),t∈T} P(w, t|a ⊗c(d)) (2) Here, w = yield(t) is the sequence of terminals that defines the leaves of the tree, which is then picked as the generated response. Assuming that G is a context-free grammar, we can spell out the probabilistic expression in Equation (2) as a history-based probabilistic model where root(t) selects a starting point for the 1332 Figure 1: The end-to-end, data-driven, grammar-based generation architecture. derivation, der(t) selects the sequence of syntactic rules to be applied, and yield(t) selects the sequence of terminals that forms the response all conditioned on the derivation history. P(w, t|.) =P(root(t)|a ⊗c(d)) (3a) × P(der(t)|root(t), a ⊗c(d)) (3b) × P(yield(t)|root(t), der(t), a ⊗c(d)) (3c) Using standard independence assumptions, Eq. (3) may be re-written as a chain of local decisions, conditioned on selected aspects of the generation history, marked here by the function Φ. P(w, t|.) ≈P(root|Φ(a ⊗c(d)))× (4a) Y rulej∈der(t) P(rulej|Φ(root, a ⊗c(d)))× (4b) Y wi∈yield(t) P(wi|Φ(t, a ⊗c(d))) (4c) In words, the probability of the starting rule (4a) is multiplied with the probability of each of the rules in the derivation (4b) and the probability of each of the terminal nodes in the tree (4c). Each decision may be conditioned on previously generated part(s) of the structure, as well as the intersection of the input document content and user agenda. 3 The Architecture A bird’s-eye view of the architecture we propose is depicted in Figure 1. The process consists of an offline component containing (I) corpus collection, (II) automatic annotation, (III) grammar induction, and (IV) topic-model training. The induced grammar along with a predefined user agenda and the pre-trained topic model are provided as input to the online generation component, which is marked with the dashed box in Figure 1. In (I) corpus collection, we collect a set of documents D with corresponding user comments. The documents in the corpus are used for training a topic model (IV), which is used for topic inference given a new input document d. 
The collected comments are used for inducing a wide-coverage grammar G for response generation. To realize the goal of ONLG, we aim to jointly model opinion, structure and lexical decisions in our induced grammars. To this end, in (II) automatic annotation we enrich the user comments with annotations that reflect different levels of linguistic information, as detailed in Section 4. In (III) grammar induction we induce a generative grammar G from the annotated corpus, following the common methodology of inducing PCFGs from syntactically annotated corpora (Charniak, 1995; Collins, 2003). We traverse the annotated trees from (III) and use maximum likelihood estimation for learning rule probabilities. No smoothing is done, and in order to filter noise from possibly erroneous parses, we use a frequency cap to define which rules can participate in derivations. We finally define and implement an efficient grammar-based generator, termed here the decoder, which carries out the generation and calculates the objective function in Eq. (4). The algorithm is described in Section 5. 1333 4 The Grammars Base Grammar. A central theme in this research is generating sentences that express a certain sentiment. Our base grammatical representation is inspired by the Stanford sentiment classification parser (Socher et al., 2013) which annotates every non-ternminal node with one of five sentiment classes s ∈{−2, −1, 0, 1, 2}. Formally, each non-terminal in our base grammar includes a constituency category C and a sentiment class label s. The derivation of depth-1 trees with a parent node p and two daughters d1, d2 will thus appear as follows: Cp[sp] →Cd1[sd1] Cd2[sd2] The generative story imposed by this grammar is quite simple: each non-terminal node annotated with a sentiment can generate either a sequence of non-terminal daughters, or a single terminal node. An example of a subtree and its generation sequence is given in Figure 2(Base). Here we see a positive NP which generates two daughters: a neutral DT and a positive NX. The positive NX generates a neutral noun NN and a positive modifying adjective JJ on its left. Such a derivation can yield NP terms such as “the good wife” or “an awesome movie”, but will not generate “some terrible words”. In this grammar, lexical realization is generated conditioned on local pre-terminals only, and independently of the syntactic structure. While the generative story is simple, this grammar can capture complex interactions of sentiment. Such interactions take place in tree structures that include elements that may affect polarity, such as negation, modal verbs and so on (see Socher et al. (2013) and examples therein). In this work we assume a completely data-driven approach wherein such structures are derived based on previously observed sentiment-interactions in sentiment-augmented parses. Lexicalized Grammar. Our base grammar suffers from a clear pitfall: the structure lacks sensitivity to lexical information, and vice versa. This base grammar essentially generates lexical items as an afterthought, conditioned only on the local part-of-speech label and sentiment value. Our first modification of the base grammar is lexicalization in the spirit of Collins (2003). In this representation each non-terminal node is decorated with a phrase-structure category C and a sentiment label s, and it is augmented with a lexical head lh. The lexical head is common to the parent and the left (or right) daughter. 
A new lexical item, termed modifier lm, is introduced in the right (left) daughter. The resulting depth-1 subtree for a parent p with daughters d1, d2 and a lexical head on the left (without loss of generality) is: Cp[sp, lh] →Cd1[sd1, lh] Cd2[sd2, lm] Lexicalization makes the grammar more useful for generation as lexical choices can be made at any stage of the derivation conditioned on part of the structure. But it has one drawback – it assumes very strong dependence between lexical items that happen to appear as sisters. To overcome this, we define a head-driven generative story that follows the model of Collins (2003), where the mother non-terminal generates first the head node, and then, conditioned on the head it generates a modifying constituent to the left (right) of the head and its corresponding modifying lexical dependent. An example subtree and its associated head-driven generative story is illustrated in Figure 2(Lex). Relational-Realizational Grammar. Generating phrase-structures along with lexical realization can manage form — control how sentences are built. For coherent generation we would like to also control for the function of nodes in the derivation. To this end, we define a grammar and a generative story in the spirit of the RelationalRealizational (RR) grammar of Tsarfaty (2010). In our RR-augmented trees, each non-terminal node includes, on top of the phrase-structure category C, the lexical head l and the sentiment s, a relation label depi which determines its functional role in relation to its parent. The functional component will affect the selection of daughters so that the derived subtree fulfils its function. A depth-1 subtree will thus appear as follows: Ci[si, depi, li] →Cj[sj, depj, li] Ck[sk, depk, lk] The generative story of our RR representation follows the three-phase process defined by Tsarfaty and Sima’an (2008) and Tsarfaty (2010): (i) projection: given a constituent and a sentiment value, generate a set of grammatical relations which define the functions of the daughters to be generated. 1334 (a) (b) (Base) NP[+1] DT[0] The NX[+1] JJ[+1] good NN[0] wife Type LHS RHS SYN NP[+1] → DT[0] NX[+1] SYN NX[+1] → JJ[+1] NN[0] LEX DT[0] → The LEX JJ[+1] → good LEX NN[0] → wife (Lex) NP[+1,wife] DT[0,The] The NX[+1,wife] JJ[+1,good] good NN[0,wife] wife Type LHS RHS HEAD NP[+1,wife] →r NX[+1] MOD NP[+1,wife], NX[+1] →l DT[0] LEX-H NP[+1,wife],NX[+1] → wife LEX NP[+1,wife], NX[+1,wife], DT[0] → the HEAD NX[+1,wife] →r NN[0] MOD NX[+1,wife], NN[0] →l JJ[+1] LEX-H NX[+1,wife], NN[0] → wife LEX NX[+1,wife], NN[0,wife],JJ[+1] → good (RR) NP[+1,root,wife] DT[0,det,The] The NX[+1,hd,wife] JJ[+1,amod,good] good NN[0,hd,wife] wife Type LHS RHS PROJ NP[+1] → {amod,det,hd}@NP[+1] CONF {amod,det,hd}@NP[+1] → <det>@NP[+1], <{amod,hd}>@NP[+1] REAL-C <det>@NP[+1] → DT[0] REAL-C <{amod,hd} >@NP[+1] → NX[+1] REAL-L DT[0,det]@NP[+1,hd,wife] → The REAL-L NX[+1,hd]@NP[+1,hd,wife] → wife PROJ NX[+1] → {amod,hd} @NX[+1] CONF {amod, hd}@NX[+1] → <amod>@NX[+1] , <hd>@NX[+1] REAL-C <amod>@NX[+1] → JJ[+1] REAL-C <hd>@NX[+1] → NN[0] REAL-L JJ[+1,amod]@NX[+1,hd,wife] → good REAL-L NN[+1,hd]@NX[+1,hd,wife] → wife Figure 2: Our grammatical representations, with (a) a sample tree and (b) its generation sequence. A rule of type SYN marks syntactic rules, LEX indicates lexical realization, HEAD, MOD indicate head selection and modifier selection, PROJ,CONF,REAL indicate projection, configuration and realization, respectively. 
The @ sign indicates aspects in the generation history that the production is conditioned on (Φ in eq. 4). (ii) configuration: given a constituent, sentiment and an unordered set of relations, an ordering for the relations is generated. Unlike the original RR derivations which fully order the set, here we partition the set into two disjoint sets (one of which is a singleton) and order them. This modification ensures that we adhere to binary trees. (iii) realization: For each function-labels’ set we select the daughter’s constituent realizing it. We first generate the constituent and sentiment realizing this function, and then, conditioned on the constituent, sentiment, head and function, we select the lexical dependent. An example tree along with its RR derivation is given in Figure 2(RR). 5 Grammar-Based Generation Our grammar-based generator is a top-down algorithm which starts with a frontier that includes a selected root, and expands the tree continually by substituting non-terminals at the left-hand-side of rules with their daughters on the right hand side, until no more non-terminals exist. This generation procedure yields one sentence for any given root. Due to independence assumptions inherent in the generative processes we defined, there is no guarantee that generated sentences will be completely grammatical, relevant and human-like. To circumvent this, we develop an over-generation algorithm that modifies the basic algorithm to select multiple rules at each generation point, and apply them to uncover several derivation trees, or a forest. 1335 We then use a variation on the beam search algorithm (Reddy, 1977) and devise a methodology to select the k-best scoring trees to be carried on to the next iteration. Specifically, we use a BreadthFirst algorithm for expanding the tree and define a dynamic programming algorithm that takes the score of a derivation tree of n−1 expanded nodes, selects a new rule for the next non-expanded node, and from it, calculates the score of the expanded tree with now n nodes. For comparing the trees, we computed a score according to Eq. (4) for the tree generated so far, and used an average node score to neutralize size difference between trees. To make sure our responses target a particular topic, we propose to condition the selection of lexical items at the root on the topic at the intersection of the document content and user agenda, essentially preferring derivations that yield words related to the input topic distribution. In practice we use topic model scores to estimate the root rule probability, selecting lexical item(s) for generation to start with: ˆP(root(t)|a ⊗c(d)) = ˆP(ROOT →l1l2|a ⊗c(d)) = PN c=1 P2 i=1 tm weight(c) ∗word weight(c, li) (5) where tm weight(c) is the weight of topic c in the topic distribution at the document-agenda intersection, and word weight(c, li) is the weight of the lexical head word li within the word distribution of topic c in the given topic model. The generation process ends when all derivations reach (at most) a pre-defined height (to avoid endless recursions). We then re-rank the generated candidates. The re-ranking is based on a 3-grams language model on the raw yield of the sentence, divided by the length of the sentence to obtain a per-word average and avoid length biases.1 6 Evaluation Goal. We aim to evaluate the grammars’ applicability to the ONLG task. Set in an open domain, it is not trivial to find a “gold-standard” for this task, or even a method to obtain one. 
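To make the over-generation step above concrete, the following is a minimal Python sketch of a top-down expander that keeps the k most probable rules at each expansion point instead of committing to a single one, so that one root unfolds into a forest of partial derivations. It is only an illustration of the idea, not the authors' generator: the names Rule, PartialTree and expand_once, and the representation of the grammar as a dictionary from left-hand sides to scored rules, are assumptions made here for the sketch.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Rule:
    lhs: str          # non-terminal, e.g. "NP[+1]"  (hypothetical encoding)
    rhs: tuple        # daughter non-terminals, or a single terminal word
    logprob: float

@dataclass
class PartialTree:
    frontier: list    # non-terminals still to be expanded (leftmost first)
    words: list       # terminals realized so far
    logprob: float

def expand_once(trees, grammar, k):
    """Expand the leftmost frontier symbol of every partial tree with its
    k most probable rules, returning the enlarged forest."""
    forest = []
    for t in trees:
        if not t.frontier:                      # fully realized: carry over unchanged
            forest.append(t)
            continue
        sym, rest = t.frontier[0], t.frontier[1:]
        # keep the k best-scoring rules for this symbol (over-generation)
        for rule in heapq.nlargest(k, grammar.get(sym, []), key=lambda r: r.logprob):
            if len(rule.rhs) == 1 and rule.rhs[0] not in grammar:    # terminal rule
                forest.append(PartialTree(list(rest), t.words + [rule.rhs[0]],
                                          t.logprob + rule.logprob))
            else:                                                    # non-terminal rule
                forest.append(PartialTree(list(rule.rhs) + list(rest), list(t.words),
                                          t.logprob + rule.logprob))
    return forest
```

How this forest is pruned and scored at each iteration is described next.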
Our evaluation thus follows two tracks: an automated assessment track, where we quantitatively assess the responses, and a Turing-like test similar to that of Cagan et al. (2014), where we aim to gauge human-likeness and response relevance. 1Here we use Microsoft’s WebLM API which is part of the Microsoft Oxford Project (Microsoft, 2011). Materials. We collected a new corpus of news articles and corresponding user comments from the NY-Times R⃝web site, using their open Community API. We focus on sports news, which gave us 3,583 news articles and 13,100 user comments, or 55,700 sentences. The articles are then used for training a topic model using the Mallet library (McCallum, 2002). Next, we use the comments in the corpus to induce the grammars. To obtain our Base representation we parse the sentences using the Stanford CoreNLP suite (Manning et al., 2014) which can provide both phrase-structure and sentiment annotation. To obtain our Lexicalized representation we follow the same procedure, this time also using a head-finder which locates the head word for each non-terminal. To obtain the Relational-Realizational representation we followed the algorithm described in Tsarfaty et al. (2011), which, given both a constituency parse and a dependency parse of a sentence, unifies them into a lexicalized and functional phrasestructure. The merging is based on matching spans over words within the sentence.2 Setup. We simulated several scenarios. In each, the system generates sentences with one grammar (G ∈{Base, Lex, RR}) and one scoring scheme (with/without topic model scores). The results of each simulation are 5,000 responses for each variant of the system, consisting of 1,000 sentences for each sentiment class, s ∈{−2, −1, 0, 1, 2}. The same 5000 generated sentences were used in all experiments. We set the generator for trees of maximum depth of 13 which can yield up to 4096 words. In reality, the realization was of much shorter sentences. Examples for generated responses are given in Table 1. 6.1 Comparing Grammars Goal and Metrics. In this experiment we compare and contrast the generation capacity of the grammars, using the following metrics: (i) Fluency measures how grammatical or natural the generated sentences are. We base this measure on a probabilistic language model which gives an indication of how common wordsequences within the sentence are. We express fluency as a Language Model (LM) score which is calculated using the Microsoft Web ML API to get aggregated minus-log probabilities of all 3-grams 2The collected corpus and supplementary annotations are available at www.tomercagan.com/onlg. 1336 in the sentence. The aggregated score is then normalized to give a per-word average in order to cancel any effects of sentence length. (ii) Sentiment Agreement measures whether the inferred sentiment of the response matches the input sentiment parameter used for generation. Specifically, we take the raw yield of the generated tree (a sentence) and run it through the sentiment classifier implemented in Socher et al. (2013), to assign the full sentence one of 5 sentiment classes between −2 and +2. During evaluation, we compare the classified sentiment of the generated sentence is with the sentiment entered as input for the derivation of the sentence, and report the rate of agreement on (a) level (−2.. + 2) and (b) polarity (−/+), which is a more relaxed measure. (iii) The Consiceness/tightness metric aims to evaluate which grammar derives a simpler structure across generations of similar content. 
Our tightness evaluation is based on the percentage of sentences that were fully realization as terminals within the specific height limit;3 we simply observe how many trees have all leaves as terminal symbols. Intuitively, tighter grammars lead to improved performance and better control over the generated content. It is possible to think of what it captures in terms Occams Razor, preferring the simpler structure to derive comparable outcome. Empirical Results The results of our evaluation are presented in Table 2. With respect to the above metrics, the RR grammar was more compact and natural compared to the lexicalized (LEX) grammar: the per-word LM Score for the RR is −5.6 as compared to −6.5 for LEX. Also, RR has 95.7% complete sentences as compared to only 67.3% for LEX. The LEX grammar was more sensitive to the sentiment input but only slightly, having a 44.6% sentiment agreement and 63.9% sentiment polarity agreement compared to 43.8% and 61.0% for RR grammar. The BASE grammar gave the worst performance for all measures. This provides preliminary evidence in support of incorporating surface realization (lexicalization) into the syntactic generation, rather than filling slots in retrospect. 6.2 Testing Relevance Goal and Metrics Next we aim to evaluate the relevance of the responses to the input document triggering the response. We do so by calculating 3A height of 13 makes a maximum sentence length of 213-1 = 212 = 4096 words. Grammar Sentiment Sentence -2 (and badly should doesn’t.. -1 doesn’t of the yankees.. BASE 0 who is the the game,. 1 is the the united states.. 2 is the best players.. -2 is a rhyme ... mahi mahi, and, I not quote Bunny. -1 Dumpster unpire are the villans. LEX 0 Derogatory big names symbols wider 1 New england has been playful, and infrequent human. 2 That’s a huge award – having get fined! -2 he is very awkward, and to any ridiculous reason. -1 the malfeasance underscores the the widespread belief. RR 0 the programs serve the purposes. 1 McIIroy is a courageous competitor. 2 The urgent service’s a grand idea. Table 1: Responses generated by the system with the different grammars and sentiment levels. Grammar Avg. LM Score Avg. LM Score Complete Sentiment Avg. per word Sentences Agreement Length Mean CI Mean CI (%) / Polarity (%) (words) BASE -79.7 ±0.054 -8.9 ±0.007 20.1 13.3 / 41.8 9.5 LEX -73.7 ±0.016 -6.5 ±0.002 67.3 44.6 / 63.9 12.3 RR -51.8 ±0.011 -5.6 ±0.001 95.7 43.8 / 61.0 9.6 HUMAN -50.1 ±0.000 -5.4 ±0.000 N/A N/A 10.3 Table 2: Mean and 95% Confidence Interval (CI) of language model scores, and measures of compactness and sentiment agreement. The last row, HUMAN refers to the collected human responses. Topic Agreement, a measure that, given a trained topic model, determines how close the topic distribution of the input document and that of the generated response are. We use L2 to calculate the distance between the inferred topic distribution vectors. We focus here on relevance testing for the RR grammar, which gave superior LM scores. In this test we use two generators – RR generator as defined above, and RRTM generator that uses the scoring scheme of Equation (5) to select a start rule deriving the root lexical item. Example sentences of each generator are presented in Table 3. Empirical Results The results of the two generators and their average distance from the topic distribution of the input document are presented in Table 4. 
Here we see that the generator using topic models for selecting start rules (RRTM) gets topic distribution that is closer to the input document’s topic distribution. The last row, HUMAN, calculates the distance between the topic distributions in the documents and their human responses from the collected corpus. The fact that RRTM outperforms HUMAN is not necessarily surprising, as sentences in human responses are typically from longer paragraphs where some sentences are more generic, used as connectives, interjections, etc. 1337 Grammar Sentiment Sentence -2 they deserve it, but I is fear. -1 the saga is correct. RR 0 the indirect penalty? 1 the job is correct. 2 a salaries excels. -2 Unfortunately, they remind that to participate in baseball. -1 the franchise would he made? RRTM 0 Probably the LONG time . 1 In a good addition, he is a good baseball player. 2 the baseball game sublime. Table 3: Responses generated by the system using emission probabilities and topic models for the start rule selection. Generator Mean CI RR 0.473 ± 0.003 RRTM 0.424 ± 0.003 HUMAN 0.429 ± 0.000 Table 4: Mean and 95% Confidence Interval (CI) for generators with / without topic models scores (RRTM / RR respectively). The last row, HUMAN refers to the collected human responses. 6.3 Human Surveys Goal and Procedure. We evaluate humanlikeness of the generated responses by collecting data via an online survey on Amazon Mechanical Turk. In the survey, participants were asked to judge whether generated sentences were written by a human or a computer. The participants were screened to have a good level of English and reside in the US. Each survey comprised of 50 randomly ordered trials. In each trial the participant was shown a response. The task was to categorize each response on a 7-point scale with labels ‘Certainly human/computer’, ‘Probably human/computer’, ‘Maybe human/computer’ and ‘Unsure’. In 50 trials the participant was exposed to 3-4 sentences for each grammar/sentiment combination. Empirical Results. Average human-likeness ratings (scale 1–7) are presented in Table 5. Here, we see that sentences generated by the lexicalized grammar were perceived as most human-like. This result is in contrast with the automatic evaluation. Such a discrepancy need not be very surprising, as noted by others before (Belz and Reiter, 2006). Cagan et al. (2014) show that there are extra-grammatical factors affecting human-likeness, e.g. world knowledge. We hypothesise that the LEX grammar, which relies heavily on lexical co-occurrences frequencies, is better at replicating world knowledge and idiomatic phrases thus judged as more human. Grammar Mean CI BASE 2.4561 ± 0.004 LEX 4.1681 ± 0.004 RR 3.7278 ± 0.004 Table 5: Mean and 95% Confidence Interval (CI) for human-likeness ratings (scaling 1:low–7:high). Factor b Std. Error z-value P(> |z|) G-LEX 2.90 0.189 15.32 <.00001 G-RR 2.33 0.164 14.20 <.00001 SENT 0.17 0.074 2.32 .020 NWORD -1.60 0.107 -14.95 <.00001 POS 0.21 0.036 5.97 <.00001 G-LEX × SENT -0.18 0.095 -1.91 .056 G-RR × SENT 0.44 0.096 4.53 <.00001 G-LEX × NWORD 1.31 0.117 11.16 <.00001 G-RR × NWORD 1.35 0.138 9.80 <.00001 NWORD × POS 0.10 0.037 2.81 .005 Table 6: Regression analysis of the human survey. In a qualitative inspection on a sample of the results we could verify that the LEX grammar tends to replicate idiomatic sequences while the RR grammar generates novel phrases in a more compositional fashion. Grammaticality is not hindered by it, but apparently human-likeness is. 
We also run an ordinal mixed-effects regression, which is an appropriate way to analyse discrete rating data. Regression model predictors were Grammar (G), sentiment level (SENT), response length (NWORD), position of response in rating session (POS), and all two-way interactions between these. Quantitative predictors were standardized and non-significant (p > .05) interactions were dropped from the fitted model. Byparticipant random intercepts and slopes of G and SENT were included as random effects. Table 6 displays the fitted model fixed effects, with BASE grammar as the reference level. Consistent with Table 5, we see that LEX and RR score significantly higher on human likeness than BASE. These effects are modulated by sentiment: more positive sentiment makes BASE and RR more human-like (respectively: b = 0.17 and b = 0.44) whereas the LEX grammar becomes less human like (although this effect is only marginally significant: b = −.18). In addition, these effects are also modulated by sentence length in #words – longer sentences make BASE less human-like (b = −1.60) but RR and LEX more human-like (respectively: b = 1.31 and b = 1.35) Importantly, there is a weak but significant positive effect of position (b = 0.21), indicating that human-likeness ratings increase over the course of a rating session. This effect does not depend on the grammar, but is somewhat stronger for longer 1338 sentences (b = 0.10). The position effect contrasts markedly with the decrease of human-likeness ratings that (Cagan et al., 2014) ascribed to a learning effect: there, raters noticed the repetitive structure and took this to be a sign that the utterances were machine generated. The fact that we find no such effect means that our grammars successfully avoided such repetitiveness. 7 Related and Future Work NLG is often cast as a concept-to-text (C2T) challenge, where a structured record is transformed into an utterance expressing its content. C2T is usually addressed using template-based (Becker, 2002) or data-driven (Konstas and Lapata, 2013; Yuan et al., 2015) approaches. In particular, researchers explored data-driven grammar-based approaches (Cahill and van Genabith, 2006), often assuming a custom grammar (Konstas and Lapata, 2013) or a closed-domain approach (DeVault et al., 2008). ONLG in contrast is set in an open domain, and expresses multiple dimensions (grammaticality, sentiment, topic). In the context of social media, generating responses to tweets has been cast as a sequence-tosequence (seq2seq) transduction problem, and has been addressed using statistical machine translation (SMT) methods (Ritter et al., 2011; Hasegawa et al., 2013). In this seq2seq setup, moods and sentiments expressed in the past are replicated or reused, but these responses do not target particular topics and are not driven by a concrete user agenda. An exception is a recent work by Li et al. (2016), exploring a persona-based conversational model, and Xu et al. (2016) who encode loose structured knowledge to condition the generation on. These studies present a stepping stone towards full-fledge neural ONLG architectures with some control over the user characteristics. The surge of interest in neural network generation architectures has spawned the development of seq2seq models based on encoder-decoder setup (Sordoni et al. (2015); Li et al. (2016, 2017) and references therein). These architectures require a very large dataset to train on. 
In the future we aim to extend our dataset and explore neural network architectures for ONLG that can encode a useragenda, a document, and possibly stylistic choices (Biber and Conrad, 2009; Reiter and Williams, 2010) — in the hope of yielding more diverse, relevant and coherent responses to online content. 8 Conclusion We approached ONLG from a data-driven perspective, aiming to overcome the shortcomings of previous template-based approaches. Our contribution is threefold: (i) we designed three types of broad-coverage grammars appropriate for the task, (ii) we developed a new enriched data-set for inducing the grammars, and (iii) we empirically demonstrated the strengths of the LEX and RR grammars for generation, as well as the overall usefulness of sentiment and topic models incorporated into the syntactic derivation. Our results show that the proposed grammar-based architecture indeed avoids the repetitiveness and learning effects observed in the template-based ONLG. To the best of our knowledge, this is the first data-driven agenda-driven baseline for ONLG, and we believe it can be further improved. Some future avenues for investigation include improving the relevance and human-likeness results by improving the automatic parses quality, acquiring more complex templates via abstract grammars, and experimenting with more sophisticated scoring functions for reranking. With the emergence of deep learning, we further embrace the opportunity to combine the sequence-to-sequence modeling view explored so far with conditioning generation on speakers agendas and user profiles, pushing the envelope of opinionated generation further. Finally, we believe that future work should be evaluated in situ, to examine if, and to what extent, the generated responses participate in and affect the discourse (feed) in social media. References Tilman Becker. 2002. Practical, template-based natural language generation with TAG. In Proceedings of the 6th International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+6). Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In Proceeding of EACL’06. pages 313–320. D. Biber and S. Conrad. 2009. Register, Genre, and Style. Cambridge Textbooks in Linguistics. Cambridge University Press. https://books.google.de/books?id=0HUhombmOJUC. Tomer Cagan, Stefan L. Frank, and Reut Tsarfaty. 2014. Generating subjective responses to opinionated articles in social media: An agenda-driven architecture and a Turing-like test. In Proceedings of the Joint Workshop on Social Dynamics and 1339 Personal Attributes in Social Media. Association for Computational Linguistics, pages 58–67. http://www.aclweb.org/anthology/W/W14/W142708. Aoife Cahill and Josef van Genabith. 2006. Robust PCFG-based generation using automatically acquired LFG approximations. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL-44, pages 1033–1040. https://doi.org/10.3115/1220175.1220305. Eugene Charniak. 1995. Parsing with context-free grammars and word statistics. Technical report, Providence, RI, USA. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics 29(4):589–637. https://doi.org/10.1162/089120103322753356. David DeVault, David Traum, and Ron Artstein. 2008. Practical grammar-based NLG from examples. 
In Proceedings of the Fifth International Natural Language Generation Conference. Association for Computational Linguistics, Stroudsburg, PA, USA, INLG ’08, pages 77–85. http://dl.acm.org/citation.cfm?id=1708322.1708338. Donghui Feng, Erin Shaw, Jihie Kim, and Eduard Hovy. 2006. An intelligent discussion-bot for answering student queries in threaded discussions. In Proceedings of Intelligent User Interface (IUI2006). pages 171–177. Michael Haenlein and Andreas M. Kaplan. 2009. Flagship brand stores within virtual worlds: The impact of virtual store exposure on real-life attitude toward the brand and purchase intent. Recherche et Applications en Marketing (English Edition) 24(3):57–79. https://doi.org/10.1177/205157070902400303. Takayuki Hasegawa, Nobuhiro Kaji, Naoki Yoshinaga, and Masashi Toyoda. 2013. Predicting and eliciting addressee’s emotion in online dialogue. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 964–972. http://www.aclweb.org/anthology/P13-1095. Philip N. Howard, Aiden Duffy, Deen Freelon, Muzammil Hussain, Will Mari, and Marwa Mazaid. 2011. Opening closed regimes: What was the role of social media during the Arab spring? Project on Information Technology and Political Islam. http://pitpi.org/index.php/2011/09/11/openingclosed-regimes-what-was-the-role-of-social-mediaduring-the-arab-spring/. Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. Journal of Artificial Intelligence Research 48:305–346. Wiebke Lamer. 2012. Twitter and tyrants: New media and its effects on sovereignty in the Middle East. Arab Media and Society http://www.arabmediasociety.com/?article=798. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. CoRR abs/1603.06155. http://arxiv.org/abs/1603.06155. Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. CoRR abs/1701.06547. http://arxiv.org/abs/1701.06547. Paul Mah. 2012. Tools to automate your customer service response on social media. Visited August 2013. http://www.itbusinessedge.com/blogs/smbtech/tools-to-automate-your-customer-serviceresponse-on-social-media.html. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. pages 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://www.cs. umass.edu/˜mccallum/mallet. Microsoft. 2011. Microsoft cognitive services. https://www.microsoft.com/cognitive-services/enus/web-language-model-api. Kyoshi Mori, Adam Jatowt, and Mitsuru Ishizuka. 2003. Enhancing conversational flexibility in multimodal interactions with embodied lifelike agent. In Proceedings of the 8th International Conference on Intelligent User Interfaces. ACM, New York, NY, USA, IUI ’03, pages 270–272. https://doi.org/10.1145/604045.604096. Jeremiah Owyang. 2012. Brands Start Automating Social Media Responses on Facebook and Twitter. Visited August 2013. http://techcrunch.com/2012/06/07/brands-startautomating-social-media-responses-on-facebookand-twitter/. Erik Qualman. 2012. Socialnomics: How social media transforms the way we live and do business. 
John Wiley & Sons, Hoboken, NJ, USA, 2nd edition. https://books.google.co.il/books?id=yAqD19i2U0UC. D. Raj Reddy. 1977. Speech understanding systems: summary of results of the five-year research effort at Carnegie-Mellon University. Technical report, Carnegie-Mellon University. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering 3(1):57–87. https://doi.org/10.1017/S1351324997001502. 1340 Ehud Reiter and Sandra Williams. 2010. Generating texts in different styles. In Shlomo Argamon, Kevin Burns, and Shlomo Dubnov, editors, The Structure of Style - Algorithmic Approaches to Understanding Manner and Meaning., Springer, pages 59–75. Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP ’11, pages 583–593. http://dl.acm.org/citation.cfm?id=2145432.2145500. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, pages 1631–1642. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 196–205. http://www.aclweb.org/anthology/N151020. Reut Tsarfaty. 2010. Relational-Realizational Parsing. Ph.D. thesis, Institute for Logic, Language and Computation, University of Amsterdam. Reut Tsarfaty, Joakim Nivre, and Evelina Andersson. 2011. Evaluating dependency parsing: Robust and Heuristics-Free Cross-Annotation evaluation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. pages 385–396. http://www.aclweb.org/anthology/D111036. Reut Tsarfaty and Khalil Sima’an. 2008. Relational-realizational parsing. In Proceedings of the 22Nd International Conference on Computational Linguistics. Association for Computational Linguistics, pages 889–896. http://dl.acm.org/citation.cfm?id=1599081.1599193. Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating loosestructured knowledge into LSTM with recall gate for conversation modeling. CoRR abs/1605.05110. http://arxiv.org/abs/1605.05110. Caixia Yuan, Xiaojie Wang, and Qianhui He. 2015. Proceedings of the 15th European Workshop on Natural Language Generation (ENLG), Association for Computational Linguistics, chapter Response Generation in Dialogue Using a Tailored PCFG Parser, pages 81–85. http://aclweb.org/anthology/W154713. 1341
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1342–1352 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1123 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1342–1352 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1123 Learning to Ask: Neural Question Generation for Reading Comprehension Xinya Du1 Junru Shao2 Claire Cardie1 1Department of Computer Science, Cornell University 2Zhiyuan College, Shanghai Jiao Tong University {xdu, cardie}@cs.cornell.edu [email protected] Abstract We study automatic question generation for sentences from text passages in reading comprehension. We introduce an attention-based sequence learning model for the task and investigate the effect of encoding sentence- vs. paragraph-level information. In contrast to all previous work, our model does not rely on hand-crafted rules or a sophisticated NLP pipeline; it is instead trainable end-to-end via sequenceto-sequence learning. Automatic evaluation results show that our system significantly outperforms the state-of-the-art rule-based system. In human evaluations, questions generated by our system are also rated as being more natural (i.e., grammaticality, fluency) and as more difficult to answer (in terms of syntactic and lexical divergence from the original text and reasoning needed to answer). 1 Introduction Question generation (QG) aims to create natural questions from a given a sentence or paragraph. One key application of question generation is in the area of education — to generate questions for reading comprehension materials (Heilman and Smith, 2010). Figure 1, for example, shows three manually generated questions that test a user’s understanding of the associated text passage. Question generation systems can also be deployed as chatbot components (e.g., asking questions to start a conversation or to request feedback (Mostafazadeh et al., 2016)) or, arguably, as a clinical tool for evaluating or improving mental health (Weizenbaum, 1966; Colby et al., 1971). In addition to the above applications, question generation systems can aid in the development of Sentence: Oxygen is used in cellular respiration and released by photosynthesis, which uses the energy of sunlight to produce oxygen from water. Questions: – What life process produces oxygen in the presence of light? photosynthesis – Photosynthesis uses which energy to form oxygen from water? sunlight – From what does photosynthesis get oxygen? water Figure 1: Sample sentence from the second paragraph of the article Oxygen, along with the natural questions and their answers. annotated data sets for natural language processing (NLP) research in reading comprehension and question answering. Indeed the creation of such datasets, e.g., SQuAD (Rajpurkar et al., 2016) and MS MARCO (Nguyen et al., 2016), has spurred research in these areas. For the most part, question generation has been tackled in the past via rule-based approaches (e.g., Mitkov and Ha (2003); Rus et al. (2010). The success of these approaches hinges critically on the existence of well-designed rules for declarative-to-interrogative sentence transformation, typically based on deep linguistic knowledge. 
To improve over a purely rule-based system, Heilman and Smith (2010) introduced an overgenerate-and-rank approach that generates multiple questions from an input sentence using a rule-based approach and then ranks them using a supervised learning-based ranker. Although the ranking algorithm helps to produce more ac1342 ceptable questions, it relies heavily on a manually crafted feature set, and the questions generated often overlap word for word with the tokens in the input sentence, making them very easy to answer. Vanderwende (2008) point out that learning to ask good questions is an important task in NLP research in its own right, and should consist of more than the syntactic transformation of a declarative sentence. In particular, a natural sounding question often compresses the sentence on which it is based (e.g., question 3 in Figure 1), uses synonyms for terms in the passage (e.g., “form” for “produce” in question 2 and “get” for “produce” in question 3), or refers to entities from preceding sentences or clauses (e.g., the use of “photosynthesis” in question 2). Othertimes, world knowledge is employed to produce a good question (e.g., identifying “photosynthesis” as a “life process” in question 1). In short, constructing natural questions of reasonable difficulty would seem to require an abstractive approach that can produce fluent phrasings that do not exactly match the text from which they were drawn. As a result, and in contrast to all previous work, we propose here to frame the task of question generation as a sequence-to-sequence learning problem that directly maps a sentence from a text passage to a question. Importantly, our approach is fully data-driven in that it requires no manually generated rules. More specifically, inspired by the recent success in neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), summarization (Rush et al., 2015; Iyer et al., 2016), and image caption generation (Xu et al., 2015), we tackle question generation using a conditional neural language model with a global attention mechanism (Luong et al., 2015a). We investigate several variations of this model, including one that takes into account paragraph- rather than sentence-level information from the reading passage as well as other variations that determine the importance of pre-trained vs. learned word embeddings. In evaluations on the SQuAD dataset (Rajpurkar et al., 2016) using three automatic evaluation metrics, we find that our system significantly outperforms a collection of strong baselines, including an information retrieval-based system (Robertson and Walker, 1994), a statistical machine translation approach (Koehn et al., 2007), and the overgenerate-and-rank approach of Heilman and Smith (2010). Human evaluations also rated our generated questions as more grammatical, fluent, and challenging (in terms of syntactic divergence from the original reading passage and reasoning needed to answer) than the state-of-theart Heilman and Smith (2010) system. In the sections below we discuss related work (Section 2), specify the task definition (Section 3) and describe our neural sequence learning based models (Section 4). We explain the experimental setup in Section 5. Lastly, we present the evaluation results as well as a detailed analysis. 2 Related Work Reading Comprehension is a challenging task for machines, requiring both understanding of natural language and knowledge of the world (Rajpurkar et al., 2016). 
Recently many new datasets have been released and in most of these datasets, the questions are generated in a synthetic way. For example, bAbI (Weston et al., 2016) is a fully synthetic dataset featuring 20 different tasks. Hermann et al. (2015) released a corpus of cloze style questions by replacing entities with placeholders in abstractive summaries of CNN/Daily Mail news articles. Chen et al. (2016) claim that the CNN/Daily Mail dataset is easier than previously thought, and their system almost reaches the ceiling performance. Richardson et al. (2013) curated MCTest, in which crowdworker questions are paired with four answer choices. Although MCTest contains challenging natural questions, it is too small for training data-demanding question answering models. Recently, Rajpurkar et al. (2016) released the Stanford Question Answering Dataset1 (SQuAD), which overcomes the aforementioned small size and (semi-)synthetic issues. The questions are posed by crowd workers and are of relatively high quality. We use SQuAD in our work, and similarly, we focus on the generation of natural questions for reading comprehension materials, albeit via automatic means. Question Generation has attracted the attention of the natural language generation (NLG) community in recent years, since the work of Rus et al. (2010). Most work tackles the task with a rule-based approach. Generally, they first transform the input sentence into its syntactic representation, which 1https://stanford-qa.com 1343 they then use to generate an interrogative sentence. A lot of research has focused on first manually constructing question templates, and then applying them to generate questions (Mostow and Chen, 2009; Lindberg et al., 2013; Mazidi and Nielsen, 2014). Labutov et al. (2015) use crowdsourcing to collect a set of templates and then rank the relevant templates for the text of another domain. Generally, the rule-based approaches make use of the syntactic roles of words, but not their semantic roles. Heilman and Smith (2010) introduce an overgenerate-and-rank approach: their system first overgenerates questions and then ranks them. Although they incorporate learning to rank, their system’s performance still depends critically on the manually constructed generating rules. Mostafazadeh et al. (2016) introduce visual question generation task, to explore the deep connection between language and vision. Serban et al. (2016) propose generating simple factoid questions from logic triple (subject, relation, object). Their task tackles mapping from structured representation to natural language text, and their generated questions are consistent in terms of format and diverge much less than ours. To our knowledge, none of the previous works has framed QG for reading comprehension in an end-to-end fashion, and nor have them used deep sequence-to-sequence learning approach to generate questions. 3 Task Definition In this section, we define the question generation task. Given an input sentence x, our goal is to generate a natural question y related to information in the sentence, y can be a sequence of an arbitrary length: [y1, ..., y|y|]. Suppose the length of the input sentence is M, x could then be represented as a sequence of tokens [x1, ..., xM]. The QG task is defined as finding y, such that: y = arg max y P (y|x) (1) where P (y|x) is the conditional log-likelihood of the predicted question sequence y, given the input x. In section 4.1, we will elaborate on the global attention mechanism for modeling P (y|x). 
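As a concrete reading of Equation (1), a trained model scores a candidate question by the sum of its per-step conditional log-probabilities, and the generated question is, in principle, the candidate that maximizes this score. The sketch below assumes a hypothetical model.step_logprob interface and is only an illustration of the objective, not the paper's code.

```python
def score_candidate(model, sentence_tokens, question_tokens):
    """Log P(y|x) factorized over time steps: sum_t log P(y_t | x, y_<t).

    `model.step_logprob(x, prefix, token)` is a hypothetical interface that
    returns the decoder's log-probability for `token` given the input
    sentence and the question prefix generated so far.
    """
    total, prefix = 0.0, []
    for tok in question_tokens:
        total += model.step_logprob(sentence_tokens, prefix, tok)
        prefix.append(tok)
    return total
```

In practice the search over all possible output sequences is intractable, so it is approximated with beam search (Section 4.3).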
4 Model Our model is partially inspired by the way in which a human would solve the task. To ask a natural question, people usually pay attention to certain parts of the input sentence, as well as associating context information from the paragraph. We model the conditional probability using RNN encoder-decoder architecture (Bahdanau et al., 2015; Cho et al., 2014), and adopt the global attention mechanism (Luong et al., 2015a) to make the model focus on certain elements of the input when generating each word during decoding. Here, we investigate two variations of our models: one that only encodes the sentence and another that encodes both sentence and paragraphlevel information. 4.1 Decoder Similar to Sutskever et al. (2014) and Chopra et al. (2016), we factorize the the conditional in equation 1 into a product of word-level predictions: P (y|x) = |y| Y t=1 P (yt|x, y<t) where probability of each yt is predicted based on all the words that are generated previously (i.e., y<t), and input sentence x. More specifically, P (yt|x, y<t) = softmax (Wstanh (Wt[ht; ct])) (2) with ht being the recurrent neural networks state variable at time step t, and ct being the attentionbased encoding of x at decoding time step t (Section 4.2). Ws and Wt are parameters to be learned. ht = LSTM1 (yt−1, ht−1) (3) here, LSTM is the Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997). It generates the new state ht, given the representation of previously generated word yt−1 (obtained from a word look-up table), and the previous state ht−1. The initialization of the decoder’s hidden state differentiates our basic model and the model that incorporates paragraph-level information. For the basic model, it is initialized by the sentence’s representation s obtained from the sentence encoder (Section 4.2). For our paragraphlevel model, the concatenation of the sentence 1344 encoder’s output s and the paragraph encoder’s output s′ is used as the initialization of decoder hidden state. To be more specific, the architecture of our paragraph-level model is like a “Y”shaped network which encodes both sentenceand paragraph-level information via two RNN branches and uses the concatenated representation for decoding the questions. 4.2 Encoder The attention-based sentence encoder is used in both of our models, while the paragraph encoder is only used in the model that incorporates paragraph-level information. Attention-based sentence encoder: We use a bidirectional LSTM to encode the sentence, −→ bt = −−−−→ LSTM2  xt, −−→ bt−1  ←− bt = ←−−−− LSTM2  xt, ←−− bt+1  where −→ bt is the hidden state at time step t for the forward pass LSTM, ←− bt for the backward pass. To get attention-based encoding of x at decoding time step t, namely, ct, we first get the context dependent token representation by bt = [−→ bt; ←− bt], then we take the weighted average over bt (t = 1, ..., |x|), ct = X i=1,..,|x| ai,tbi (4) The attention weight are calculated by the bilinear scoring function and softmax normalization, ai,t = exp hT t Wbbi  P j exp hT t Wbbj  (5) To get the sentence encoder’s output for initialization of decoder hidden state, we concatenate last hidden state of the forward and backward pass, namely, s = [−−→ b|x|; ←− b1]. Paragraph encoder: Given sentence x, we want to encode the paragraph containing x. Since in practice the paragraph is very long, we set a length threshold L, and truncate the paragraph at the Lth token. We call the truncated paragraph “paragraph” henceforth. 
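A minimal PyTorch sketch of the decoding step just described (Equations 2-3, together with the bilinear attention defined next in Section 4.2) is given below. It is a simplified, single-layer reconstruction under the notation above, not the authors' two-layer Torch7/OpenNMT implementation; module and variable names are illustrative.

```python
import torch
import torch.nn as nn

class AttnDecoderStep(nn.Module):
    """One decoding step: h_t = LSTM(y_{t-1}, h_{t-1});
    a_{i,t} = softmax_i(h_t^T W_b b_i);  c_t = sum_i a_{i,t} b_i;
    P(y_t|x, y_<t) = softmax(W_s tanh(W_t [h_t; c_t]))."""

    def __init__(self, emb_dim, hid_dim, enc_dim, vocab_size):
        super().__init__()
        self.cell = nn.LSTMCell(emb_dim, hid_dim)
        self.W_b = nn.Linear(enc_dim, hid_dim, bias=False)    # bilinear attention scoring
        self.W_t = nn.Linear(hid_dim + enc_dim, hid_dim, bias=False)
        self.W_s = nn.Linear(hid_dim, vocab_size, bias=False)

    def forward(self, y_prev_emb, state, enc_states):
        # y_prev_emb: (batch, emb_dim); state: (h, c); enc_states b_i: (batch, src_len, enc_dim)
        h_t, c_cell = self.cell(y_prev_emb, state)
        scores = torch.bmm(self.W_b(enc_states), h_t.unsqueeze(2)).squeeze(2)  # (batch, src_len)
        attn = torch.softmax(scores, dim=1)                                    # a_{i,t}
        c_t = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)              # context vector
        logits = self.W_s(torch.tanh(self.W_t(torch.cat([h_t, c_t], dim=1))))
        return torch.log_softmax(logits, dim=1), (h_t, c_cell), attn
```

Returning the attention weights also makes the UNK-replacement heuristic of Section 4.3 (copying the source token with the highest a_{i,t}) straightforward to apply.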
Denoting the paragraph as z, we use another bidirectional LSTM to encode z, −→ dt = −−−−→ LSTM3  zt, −−→ dt−1  ←− dt = ←−−−− LSTM3  zt, ←−− dt+1  With the last hidden state of the forward and backward pass, we use the concatenation [−→ d|z|; ←− d1] as the paragraph encoder’s output s′. 4.3 Training and Inference Giving a training corpus of sentence-question pairs: S = x(i), y(i) S i=1, our models’ training objective is to minimize the negative loglikelihood of the training data with respect to all the parameters, as denoted by θ, L = − S X i=1 log P  y(i)|x(i); θ  = − S X i=1 |y(i)| X j=1 log P  y(i) j |x(i), y(i) <j; θ  Once the model is trained, we do inference using beam search. The beam search is parametrized by the possible paths number k. As there could be many rare words in the input sentence that are not in the target side dictionary, during decoding many UNK tokens will be output. Thus, post-processing with the replacement of UNK is necessary. Unlike Luong et al. (2015b), we use a simpler replacing strategy for our task. For the decoded UNK token at time step t, we replace it with the token in the input sentence with the highest attention score, the index of which is arg maxi ai,t. 5 Experimental Setup We experiment with our neural question generation model on the processed SQuAD dataset. In this section, we firstly describe the corpus of the task. We then give implementation details of our neural generation model, the baselines to compare, and their experimental settings. Lastly, we introduce the evaluation methods by automatic metrics and human raters. 5.1 Dataset With the SQuAD dataset (Rajpurkar et al., 2016), we extract sentences and pair them with the ques1345 0 2000 4000 6000 8000 10000 12000 14000 16000 # sentence-question pairs < 10 (10, 20] (20, 30] (30, 40] (40, 50] (50, 60] (60, 70] (70, 80] (80, 90] (90, 100] non-stop-words overlap (%) Figure 2: Overlap percentage of sentence-question pairs in training set. y-axis is # non-stop-words overlap with respect to the total # tokens in the question (a percentage); x-axis is # sentencequestion pairs for a given overlap percentage range. tions. We train our models with the sentencequestion pairs. The dataset contains 536 articles with over 100k questions posed about the articles. The authors employ Amazon Mechanical Turks crowd-workers to create questions based on the Wikipedia articles. Workers are encouraged to use their own words without any copying phrases from the paragraph. Later, other crowd-workers are employed to provide answers to the questions. The answers are spans of tokens in the passage. Since there is a hidden part of the original SQuAD that we do not have access to, we treat the accessible parts (∼90%) as the entire dataset henceforth. We first run Stanford CoreNLP (Manning et al., 2014) for pre-processing: tokenization and sentence splitting. We then lower-case the entire dataset. With the offset of the answer to each question, we locate the sentence containing the answer and use it as the input sentence. In some cases (< 0.17% in training set), the answer spans two or more sentences, and we then use the concatenation of the sentences as the input “sentence”. Figure 2 shows the distribution of the token overlap percentage of the sentence-question pairs. Although most of the pairs have over 50% overlap rate, about 6.67% of the pairs have no nonstop-words in common, and this is mostly because of the answer offset error introduced during annotation. 
Therefore, we prune the training set based on the constraint: the sentence-question pair must have at least one non-stop-word in common. Lastly we add <SOS> to the beginning of the sen# pairs (Train) 70484 # pairs (Dev) 10570 # pairs (Test) 11877 Sentence: avg. tokens 32.9 Question: avg. tokens 11.3 Avg. # questions per sentence 1.4 Table 1: Dataset (processed) statistics. Sentence average # tokens, question average # tokens, and average # questions per sentence statistics are from training set. These averages are close to the statistics on development set and test set. tences, and <EOS> to the end of them. We randomly divide the dataset at the articlelevel into a training set (80%), a development set (10%), and a test set (10%). We report results on the 10% test set. Table 1 provides some statistics on the processed dataset: there are around 70k training samples, the sentences are around 30 tokens, and the questions are around 10 tokens on average. For each sentence, there might be multiple corresponding questions, and, on average, there are 1.4 questions for each sentence. 5.2 Implementation Details We implement our models 2 in Torch7 3 on top of the newly released OpenNMT system (Klein et al., 2017). For the source side vocabulary V, we only keep the 45k most frequent tokens (including <SOS>, <EOS> and placeholders). For the target side vocabulary U, similarly, we keep the 28k most frequent tokens. All other tokens outside the vocabulary list are replaced by the UNK symbol. We choose word embedding of 300 dimensions and use the glove.840B.300d pre-trained embeddings (Pennington et al., 2014) for initialization. We fix the word representations during training. We set the LSTM hidden unit size to 600 and set the number of layers of LSTMs to 2 in both the encoder and the decoder. Optimization is performed using stochastic gradient descent (SGD), with an initial learning rate of 1.0. We start halving the learning rate at epoch 8. The mini-batch size for the update is set at 64. Dropout with probability 2The code is available at https://github.com/ xinyadu/nqg. 3http://torch.ch/ 1346 Model BLEU 1 BLEU 2 BLEU 3 BLEU 4 METEOR ROUGEL IRBM25 5.18 0.91 0.28 0.12 4.57 9.16 IREdit Distance 18.28 5.48 2.26 1.06 7.73 20.77 MOSES+ 15.61 3.64 1.00 0.30 10.47 17.82 DirectIn 31.71 21.18 15.11 11.20 14.95 22.47 H&S 38.50 22.80 15.52 11.18 15.95 30.98 Vanilla seq2seq 31.34 13.79 7.36 4.26 9.88 29.75 Our model (no pre-trained) 41.00 23.78 15.71 10.80 15.17 37.95 Our model (w/ pre-trained) 43.09 25.96 17.50 12.28 16.62 39.75 + paragraph 42.54 25.33 16.98 11.86 16.28 39.37 Table 2: Automatic evaluation results of different systems by BLEU 1–4, METEOR and ROUGEL. For a detailed explanation of the baseline systems, please refer to Section 5.3. The best performing system for each column is highlighted in boldface. Our system which encodes only sentence with pre-trained word embeddings achieves the best performance across all the metrics. 0.3 is applied between vertical LSTM stacks. We clip the gradient when the its norm exceeds 5. All our models are trained on a single GPU. We run the training for up to 15 epochs, which takes approximately 2 hours. We select the model that achieves the lowest perplexity on the dev set. During decoding, we do beam search with a beam size of 3. Decoding stops when every beam in the stack generates the <EOS> token. All hyperparameters of our model are tuned using the development set. The results are reported on the test set. 
5.3 Baselines To prove the effectiveness of our system, we compare it to several competitive systems. Next, we briefly introduce their approaches and the experimental setting to run them for our problem. Their results are shown in Table 2. IR stands for our information retrieval baselines. Similar to Rush et al. (2015), we implement the IR baselines to control memorizing questions from the training set. We use two metrics to calculate the distance between a question and the input sentence, i.e., BM-25 (Robertson and Walker, 1994) and edit distance (Levenshtein, 1966). According to the metric, the system retrieves the training set to find the question with the highest score. MOSES+ (Koehn et al., 2007) is a widely used phrase-based statistical machine translation system. Here, we treat sentences as source language text, we treat questions as target language text, and we perform the translation from sentences to questions. We train a tri-gram language model on target side texts with KenLM (Heafield et al., 2013), and tune the system with MERT on dev set. Performance results are reported on the test set. DirectIn is an intuitive yet meaningful baseline in which the longest sub-sentence of the sentence is directly taken as the predicted question. 4 To split the sentence into sub-sentences, we use a set of splitters, i.e., {“?”, “!”, “,”, “.”, “;”}. H&S is the rule-based overgenerate-and-rank system that was mentioned in Section 2. When running the system, we set the parameter just-wh true (to restrict the output of the system to being only wh-questions) and set max-length equal to the longest sentence in the training set. We also set downweight-pro true, to down weight questions with unresolved pronouns so that they appear towards the end of the ranked list. For comparison with our systems, we take the top question in the ranked list. Seq2seq (Sutskever et al., 2014) is a basic encoder-decoder sequence learning system for machine translation. We implement their model in Tensorflow. The input sequence is reversed before training or translating. Hyperparameters are tuned with dev set. We select the model with the lowest perplexity on the dev set. 4We also tried using the entire input sentence as the prediction output, but the performance is worse than taking subsentence as the prediction, across all the automatic metrics except for METEOR. 1347 Naturalness Difficulty Best % Avg. rank H&S 2.95 1.94 20.20 2.29 Ours 3.36 3.03* 38.38* 1.94** Human 3.91 2.63 66.42 1.46 Table 3: Human evaluation results for question generation. Naturalness and difficulty are rated on a 1–5 scale (5 for the best). Two-tailed ttest results are shown for our method compared to H&S (statistical significance is indicated with ∗(p < 0.005), ∗∗(p < 0.001)). 5.4 Automatic Evaluation We use the evaluation package released by Chen et al. (2015), which was originally used to score image captions. The package includes BLEU 1, BLEU 2, BLEU 3, BLEU 4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGEL (Lin, 2004) evaluation scripts. BLEU measures the average n-gram precision on a set of reference sentences, with a penalty for overly short sentences. BLEU-n is BLEU score that uses up to n-grams for counting co-occurrences. METEOR is a recall-oriented metric, which calculates the similarity between generations and references by considering synonyms, stemming and paraphrases. ROUGE is commonly employed to evaluate n-grams recall of the summaries with goldstandard sentences as references. 
ROUGEL (measured based on longest common subsequence) results are reported. 5.5 Human Evaluation We also perform human evaluation studies to measure the quality of questions generated by our system and the H&S system. We consider two modalities: naturalness, which indicates the grammaticality and fluency; and difficulty, which measures the sentence-question syntactic divergence and the reasoning needed to answer the question. We randomly sampled 100 sentence-question pairs. We ask four professional English speakers to rate the pairs in terms of the modalities above on a 1–5 scale (5 for the best). We then ask the human raters to give a ranking of the questions according to the overall quality, with ties allowed. 6 Results and Analysis Table 2 shows automatic metric evaluation results for our models and baselines. Our model which only encodes sentence-level information achieves Sentence 1: the largest of these is the eldon square shopping centre , one of the largest city centre shopping complexes in the uk . Human: what is one of the largest city center shopping complexes in the uk ? H&S: what is the eldon square shopping centre one of ? Ours: what is one of the largest city centers in the uk ? Sentence 2: free oxygen first appeared in significant quantities during the paleoproterozoic eon -lrb- between 3.0 and 2.3 billion years ago -rrb- . Human: during which eon did free oxygen begin appearing in quantity ? H&S: what first appeared in significant quantities during the paleoproterozoic eon ? Ours: how long ago did the paleoproterozoic exhibit ? Sentence 3: inflammation is one of the first responses of the immune system to infection . Human: what is one of the first responses the immune system has to infection ? H&S: what is inflammation one of ? Ours: what is one of the first objections of the immune system to infection ? Sentence 4: tea , coffee , sisal , pyrethrum , corn , and wheat are grown in the fertile highlands , one of the most successful agricultural production regions in Africa. Human: (1) where is the most successful agricultural prodcution regions ? (2) what is grown in the fertile highlands ? H&S: what are grown in the fertile highlands in africa ? Ours: what are the most successful agricultural production regions in africa ? Sentence 5: as an example , income inequality did fall in the united states during its high school movement from 1910 to 1940 and thereafter . Human: during what time period did income inequality decrease in the united states ? H&S: where did income inequality do fall during its high school movement from 1910 to 1940 and thereafter as an example ? Ours: when did income inequality fall in the us ? Sentence 6: however , the rainforest still managed to thrive during these glacial periods , allowing for the survival and evolution of a broad diversity of species . Human: did the rainforest managed to thrive during the glacial periods ? H&S: what are treaties establishing european union ? Ours: why do the birds still grow during glacial periods ? Sentence 7: maududi founded the jamaat-e-islami party in 1941 and remained its leader until 1972. Human: when did maududi found the jamaat-e-islami party ? H&S: who did maududi remain until 1972 ? Ours: when was the jamaat-e-islami party founded ? Figure 3: Sample output questions generated by human (ground truth questions), our system and the H&S system. 
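Returning to the automatic metrics: ROUGE-L is computed from the longest common subsequence between a generated question and the reference. The sketch below only illustrates that computation; the scores reported in Table 2 come from the evaluation package of Chen et al. (2015), not from this code, and the beta value is that package's default as far as we are aware.

```python
def lcs_length(ref_tokens, hyp_tokens):
    """Longest common subsequence length via dynamic programming."""
    m, n = len(ref_tokens), len(hyp_tokens)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if ref_tokens[i - 1] == hyp_tokens[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

def rouge_l(ref_tokens, hyp_tokens, beta=1.2):
    """ROUGE-L F-measure from LCS-based recall and precision."""
    lcs = lcs_length(ref_tokens, hyp_tokens)
    if lcs == 0:
        return 0.0
    recall, precision = lcs / len(ref_tokens), lcs / len(hyp_tokens)
    return ((1 + beta ** 2) * precision * recall) / (recall + beta ** 2 * precision)
```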
1348 Category (%) H&S Ours Ours + paragraph BLEU-3 BLEU-4 METEOR BLEU-3 BLEU-4 METEOR BLEU-3 BLEU-4 METEOR w/ sentence 70.23 (243) 20.64 15.81 16.76 24.45 17.63 17.82 24.01 16.39 19.19 w/ paragraph 19.65 (68) 6.34 < 0.01 10.74 3.76 < 0.01 11.59 7.23 4.13 12.13 All* 100 (346) 19.97 14.95 16.68 23.63 16.85 17.62 24.68 16.33 19.61 Table 4: An estimate of categories of questions of the processed dataset and per-category performance comparison of the systems. The estimate is based on our analysis of the 346 pairs from the dev set. Categories are decided by the information needed to generate the question. Bold numbers represent the best performing method for a given metric. ∗Here, we leave out performance results for “w/ article” category (2 samples, 0.58%) and “not askable” category (33 samples, 9.54%). the best performance across all metrics. We note that IR performs poorly, indicating that memorizing the training set is not enough for the task. The baseline DirectIn performs pretty well on BLEU and METEOR, which is reasonable given the overlap statistics between the sentences and the questions (Figure 2). H&S system’s performance is on a par with DirectIn’s, as it basically performs syntactic change without paraphrasing, and the overlap rate is also high. Looking at the performance of our three models, it’s clear that adding the pre-trained embeddings generally helps. While encoding the paragraph causes the performance to drop a little, this makes sense because, apart from useful information, the paragraph also contains much noise. Table 3 shows the results of the human evaluation. We see that our system outperforms H&S in all modalities. Our system is ranked best in 38.4% of the evaluations, with an average ranking of 1.94. An inter-rater agreement of Krippendorff’s Alpha of 0.236 is achieved for the overall ranking. The results imply that our model can generate questions of better quality than the H&S system. For our qualitative analysis, we examine the sample outputs and the visualization of the alignment between the input and the output. In Figure 3, we present sample questions generated by H&S and our best model. We see a large gap between our results and H&S’s. For example, in the first sample, in which the focus should be put on “the largest.” Our model successfully captures this information, while H&S only performs some syntactic transformation over the input without paraphrasing. However, outputs from our system are not always “perfect”, for example, in pair 6, our system generates a question about the reason why birds still grow, but the most related question would be why many species still grow. But from when was the first teletext service introduced ? <EOS> . 1974 in starting , service teletext first the , ceefax introduced also bbc the 0.2 0.4 0.6 0.8 Figure 4: Heatmap of the attention weight matrix, which shows the soft alignment between the sentence (left) and the generated question (top). a different perspective, our question is more challenging (readers need to understand that birds are one kind of species), which supports our system’s performance listed in human evaluations (See Table 3). It would be interesting to further investigate how to interpret why certain irrelavant words are generated in the question. Figure 4 shows the attention weights (αi,t) for the input sentence when generating each token in the question. We see that the key words in the output (“introduced”, “teletext”, etc.) aligns well with those in the input sentence. 
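A heatmap like Figure 4 can be reproduced directly from the attention matrix. The sketch below uses matplotlib; the variable names and the colormap choice are ours, not part of the original system.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_attention(attention, source_tokens, target_tokens, path="attention.png"):
    """Visualize the soft alignment: rows are source tokens, columns are
    generated question tokens, cell (i, t) is the attention weight a_{i,t}."""
    matrix = np.asarray(attention)            # shape (source_len, target_len)
    fig, ax = plt.subplots(figsize=(8, 6))
    image = ax.imshow(matrix, aspect="auto", cmap="viridis")
    ax.set_xticks(range(len(target_tokens)))
    ax.set_xticklabels(target_tokens, rotation=90)
    ax.set_yticks(range(len(source_tokens)))
    ax.set_yticklabels(source_tokens)
    fig.colorbar(image, ax=ax)
    fig.tight_layout()
    fig.savefig(path)
```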
Finally, we do a dataset analysis and finegrained system performance analysis. We randomly sampled 346 sentence-question pairs from the dev set and label each pair with a category. 5 The four categories are determined by how much information is needed to ask the question. To be specific, “w/ sentence” means it only requires 5The IDs of the questions examined will be made available at https://github.com/xinyadu/nqg/ blob/master/examined-question-ids.txt. 1349 the sentence to ask the question; “w/ paragraph” means it takes other information in the paragraph to ask the question; “w/ article” is similar to “w/ paragraph”; and “not askable” means that world knowledge is needed to ask the question or there is mismatch of sentence and question caused by annotation error. Table 4 shows the per-category performance of the systems. Our model which encodes paragraph information achieves the best performance on the questions of “w/ paragraph” category. This verifies the effectiveness of our paragraph-level model on the questions concerning information outside the sentence. 7 Conclusion and Future Work We have presented a fully data-driven neural networks approach to automatic question generation for reading comprehension. We use an attentionbased neural networks approach for the task and investigate the effect of encoding sentence- vs. paragraph-level information. Our best model achieves state-of-the-art performance in both automatic evaluations and human evaluations. Here we point out several interesting future research directions. Currently, our paragraph-level model does not achieve best performance across all categories of questions. We would like to explore how to better use the paragraph-level information to improve the performance of QG system regarding questions of all categories. Besides this, it would also be interesting to consider to incorporate mechanisms for other language generation tasks (e.g., copy mechanism for dialogue generation) in our model to further improve the quality of generated questions. Acknowledgments We thank the anonymous ACL reviewers, Kai Sun and Yao Cheng for their helpful suggestions. We thank Victoria Litvinova for her careful proofreading. We also thank Xanda Schofield, Wil Thomason, Hubert Lin and Junxian He for doing the human evaluations. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations Workshop (ICLR). Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2358–2367. http://www.aclweb.org/anthology/P16-1223. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325 . Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1724–1734. http://www.aclweb.org/anthology/D141179. 
Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 93–98. http://www.aclweb.org/anthology/N16-1012. Kenneth Mark Colby, Sylvia Weber, and Franklin Dennis Hilf. 1971. Artificial paranoia. Artificial Intelligence 2(1):1–25. https://doi.org/10.1016/00043702(71)90002-6. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation. Association for Computational Linguistics, Baltimore, Maryland, USA, pages 376– 380. http://www.aclweb.org/anthology/W14-3348. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified kneser-ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 690–696. http://www.aclweb.org/anthology/P13-2121. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Los Angeles, California, pages 609–617. http://www.aclweb.org/anthology/N10-1086. 1350 Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS). pages 1693–1701. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2073–2083. http://www.aclweb.org/anthology/P16-1195. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. ArXiv e-prints . Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 177–180. http://dl.acm.org/citation.cfm?id=1557769.1557821. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 889–898. http://www.aclweb.org/anthology/P15-1086. Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. 
In Soviet physics doklady. volume 10, page 707. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Stan Szpakowicz Marie-Francine Moens, editor, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop. Association for Computational Linguistics, Barcelona, Spain, pages 74–81. http://aclweb.org/anthology/W/W04/W041013.pdf. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. Generating natural language questions to support learning on-line. In Proceedings of the 14th European Workshop on Natural Language Generation. Association for Computational Linguistics, Sofia, Bulgaria, pages 105–114. http://www.aclweb.org/anthology/W13-2114. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412– 1421. http://aclweb.org/anthology/D15-1166. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 11–19. http://www.aclweb.org/anthology/P15-1002. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny F., Steven B., and David M. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Baltimore, Maryland, pages 55–60. http://www.aclweb.org/anthology/P14-5010. Karen Mazidi and Rodney D. Nielsen. 2014. Linguistic considerations in automatic question generation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 321–326. http://www.aclweb.org/anthology/P14-2053. Ruslan Mitkov and Le An Ha. 2003. Computeraided generation of multiple-choice tests. In Jill Burstein and Claudia Leacock, editors, Proceedings of the HLT-NAACL 03 Workshop on Building Educational Applications Using Natural Language Processing. pages 17–22. http://www.aclweb.org/anthology/W03-0203.pdf. Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1802– 1813. http://www.aclweb.org/anthology/P16-1170. Jack Mostow and Wei Chen. 2009. Generating instruction automatically for the reading strategy of selfquestioning. In Proceedings of the 2nd Workshop on Question Generation (AIED 2009). pages 465–472. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine 1351 reading comprehension dataset. arXiv preprint arXiv:1611.09268 . Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics. 
Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, pages 311–318. https://doi.org/10.3115/1073083.1073135. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1532–1543. http://www.aclweb.org/anthology/D14-1162. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Austin, Texas, pages 2383–2392. https://aclweb.org/anthology/D161264. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 193–203. http://www.aclweb.org/anthology/D13-1020. Stephen E. Robertson and Steve Walker. 1994. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Springer-Verlag New York, Inc., New York, NY, USA, SIGIR ’94, pages 232–241. http://dl.acm.org/citation.cfm?id=188490.188561. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2010. The first question generation shared task evaluation challenge. In Proceedings of the 6th International Natural Language Generation Conference. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 251–257. http://dl.acm.org/citation.cfm?id=1873738.1873777. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 379–389. http://aclweb.org/anthology/D15-1044. Iulian Vlad Serban, Alberto García-Durán, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 588–598. http://www.aclweb.org/anthology/P16-1056. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems (NIPS). pages 3104–3112. Lucy Vanderwende. 2008. The importance of being important: Question generation. In Proceedings of the 1st Workshop on the Question Generation Shared Task Evaluation Challenge, Arlington, VA. Joseph Weizenbaum. 1966. Eliza&mdash;a computer program for the study of natural language communication between man and machine. Commun. ACM 9(1):36–45. https://doi.org/10.1145/365153.365168. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. 
In International Conference on Learning Representations Workshop (ICLR). Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML. volume 14, pages 77–81. 1352
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1353–1363 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1124 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1353–1363 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1124 Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback Avinesh P.V.S and Christian M. Meyer Research Training Group AIPHES and UKP Lab Computer Science Department, Technische Universit¨at Darmstadt www.aiphes.tu-darmstadt.de, www.ukp.tu-darmstadt.de Abstract In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing highquality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedbackbased concept selection in the ILP setup in order to maximize the user-desired content in the summary. 1 Introduction The task of producing summaries from a cluster of multiple topic-related documents has gained much attention during the Document Understanding Conference1 (DUC) and the Text Analysis Conference2 (TAC) series. Despite a lot of research in this area, it is still a major challenge to automatically produce summaries that are on par with human-written ones. To a large extent, this is due to the complexity of the task: a good summary must include the most relevant information, omit redundancy and irrelevant information, satisfy a length constraint, and be cohesive and grammatical. But an even bigger challenge is the high degree of subjectivity in content selection, as it can be seen in the small overlap of what is considered 1http://duc.nist.gov/ 2http://www.nist.gov/tac/ important by different users. Optimizing a system towards one single best summary that fits all users, as it is assumed by current state-of-the-art systems, is highly impractical and diminishes the usefulness of a system for real-world use cases. In this paper, we propose an interactive conceptbased model to assist users in creating a personalized summary based on their feedback. Our model employs integer linear programming (ILP) to maximize user-desired content selection while using a minimum amount of user feedback and iterations. In addition to the joint optimization framework using ILP, we explore pool-based active learning to further reduce the required feedback. Although there have been previous attempts to assist users in single-document summarization, no existing work tackles the problem of multi-document summaries using optimization techniques for user feedback. Additionally, most existing systems produce only a single, globally optimal solution. Instead, we put the human in the loop and create a personalized summary that better captures the users’ needs and their different notions of importance. Need for personalization. 
Table 1 shows the ROUGE scores (Lin, 2004) of multiple existing summarization systems, namely TF*IDF (Luhn, 1958), LexRank (Erkan and Radev, 2004), TextRank (Mihalcea and Tarau, 2004), LSA (Gong and Liu, 2001), KL-Greedy (Haghighi and Vanderwende, 2009), provided by the sumy package3 and ICSI4 (Gillick and Favre, 2009; Boudin et al., 2015), a strong state-of-the-art approach (Hong et al., 2014) in comparison to the extractive upper bound on DUC’04 and DBS. DUC’04 is an English dataset of abstractive summaries from ho3https://github.com/miso-belica/sumy 4https://github.com/boudinfl/sume 1353 Figure 1: Lexical overlap of a reference summary (cluster D31043t in DUC 2004) with the summary produced by ICSI’s state-of-the-art system (Boudin et al., 2015) and the extractive upper bound DUC’04 DBS Systems R1 R2 SU4 R1 R2 SU4 TF*IDF .292 .055 .086 .377 .144 .144 LexRank .345 .070 .108 .434 .161 .180 TextRank .306 .057 .096 .400 .167 .167 LSA .294 .045 .081 .394 .154 .147 KL-Greedy .336 .072 .104 .369 .133 .134 ICSI .374 .090 .118 .452 .183 .190 UB .472 .210 .182 .848 .750 .532 Table 1: ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-SU4 (SU4) scores of multiple systems compared to the extractive upper bound (UB) mogenous news texts, whereas DBS (Benikova et al., 2016) is a German dataset of cohesive extracts from heterogeneous sources from the educational domain (see details in section 4.1). For each dataset, we compute an extractive upper bound (UB) by optimizing the sentence selection which maximizes ROUGE-2, i.e., the occurrence of bigrams as in the reference summary (Cao et al., 2016). Although some systems achieve state-ofthe-art performance, their scores are still far from the extractive upper bound of individual reference summaries as shown in Figure 1. This is due to low inter-annotator agreement for concept selection: Zechner (2002) reports, for example, κ = .13 and Benikova et al. (2016) κ = .23. Most systems try to optimize for all reference summaries instead of personalizing, which we consider essential to capture user-desired content. Need for user feedback. The goal of concept selection is finding the important information within a given set of source documents. Although existing summarization algorithms come up with a generic notion of importance, it is still far from the user-specific importance as shown in Figure 1. In contrast, humans can easily assess importance given a topic or a query. One way to achieve personalized summarization is thus by combining the advantages of both human feedback and the generic notion of importance built in a system. This allows users to interactively steer the summarization process and integrate their user-specific notion of importance. Contributions. In this work, (1) we propose a novel ILP-based model using an interactive loop to create multi-document user-desired summaries, and (2) we develop models using pool-based active learning and joint optimization techniques to collect user feedback on identifying important concepts of a topic. In order to encourage the community to advance research and replicate our results, we provide our interactive summarizer implementation as open-source software.5. 
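As background for the upper-bound row in Table 1: the extractive upper bound maximizes coverage of the reference bigrams under the length constraint. The greedy sketch below only illustrates that idea and is not the exact optimization used in the paper (which follows Cao et al., 2016); it assumes sentences are whitespace-tokenizable strings.

```python
def bigrams(tokens):
    """Set of word bigrams in a token sequence."""
    return set(zip(tokens, tokens[1:]))

def greedy_upper_bound(sentences, reference_tokens, budget):
    """Greedily add the sentence covering the most uncovered reference
    bigrams (a proxy for ROUGE-2) until the word budget is exhausted."""
    reference = bigrams(reference_tokens)
    selected, covered, length = [], set(), 0
    pool = [(sentence, sentence.split()) for sentence in sentences]
    while pool:
        scored = [(len(bigrams(tokens) & (reference - covered)), sentence, tokens)
                  for sentence, tokens in pool if length + len(tokens) <= budget]
        scored = [entry for entry in scored if entry[0] > 0]
        if not scored:
            break
        gain, best_sentence, best_tokens = max(scored, key=lambda entry: entry[0])
        selected.append(best_sentence)
        covered |= bigrams(best_tokens) & reference
        length += len(best_tokens)
        pool = [(s, t) for s, t in pool if s is not best_sentence]
    return selected
```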
Our proposed method and our new interactive summarization framework can be used in multiple application scenarios: as an interactive annotation tool, which highlights important sentences for the annotators, as a journalistic writing aid that suggests important, user-adapted content from multiple source feeds (e.g., live blogs), and as a medical data analysis tool that suggests key information assisting a patient’s personalized medical diagnosis. The rest of the paper is structured as follows: In section 2, we discuss related work. Section 3 5https://github.com/UKPLab/ acl2017-interactive_summarizer 1354 introduces our computer-assisted summarization framework using the concept-based optimization. Section 4 describes our experiment data and setup. In section 5, we then discuss our results and analyze the performance of our models across different datasets. Finally, we conclude the paper in section 6 and discuss future work. 2 Related Work Previous works related to our research address extractive summarization as a budgeted subset selection problem, computer-assisted approaches, and personalized summarization models. Bugeted subset selection. Extractive summarization systems that compose a summary from a number of important sentences from the source documents are by far the most popular solution for MDS. This task can be modeled as a budgeted maximum coverage problem. Given a set of sentences in the document collection, the task is to maximize the coverage of the subset of sentences under a length constraint. The scoring function estimates the importance of the content units for a summary. Most previous works consider sentences as content units and try different scoring functions to optimize the summary. One of the earliest systems by McDonald (2007) models a scoring function by simultaneously maximizing the relevance scores of the selected content units and minimizing their pairwise redundancy scores. They solve the global optimization problem using an ILP framework. Later, several state-of-the-art results employed an ILP to maximize the number of relevant concepts in the created summary: Gillick and Favre (2009) use an ILP with bigrams as concepts and hand-coded deletion rules for compression. Berg-Kirkpatrick et al. (2011) combine grammatical features relating to the parse tree and use a maximum-margin SVM trained on annotated gold-standard compressions. Woodsend and Lapata (2012) jointly optimize content selection and surface realization, Li et al. (2013) estimate the weights of the concepts using supervised methods, and Boudin et al. (2015) propose an approximation algorithm to achieve the optimal solution. Although these approaches achieve state-of-the-art performance, they produce only one globally optimal summary which is impractical for various users due to the subjectivity of the task. Therefore, we research interactive computer-assisted approaches in order to produce personalized summaries. Computer-assisted summarization. The majority of the existing computer-assisted summarization tools (Craven, 2000; Narita et al., 2002; Orˇasan et al., 2003; Orˇasan and Hasler, 2006) present important elements of a document to the user. Creating a summary then requires the human to cut, paste, and reorganize the important elements in order to formulate a final text. The work by Orˇasan and Hasler (2006) is closely related to ours, since they assist users in creating summaries for a source document based on the output of a given automatic summarization system. 
However, their system is neither interactive nor does it consider the user's feedback in any way. Instead, they suggest the output of a state-of-the-art (single-document) summarization method as a summary draft and ask the user to construct the summary without further interaction.

Personalized summarization. While most previous work focuses on generic summaries, there have been a few attempts to take a user's preferences into account. The study by Berkovsky et al. (2008) shows that users prefer personalized summaries that precisely reflect their interests. These interests are typically modeled with the help of a query (Park and An, 2010) or keyword annotations reflecting the user's opinions (Zhang et al., 2003). In another strand of research, Díaz and Gervás (2007) create user models based on social tagging, and Hu et al. (2012) rank sentences by combining informativeness scores with a user's interests based on fuzzy clustering of social tags. Extending the use of social content, recent work shows how personalized review summaries (Poussevin et al., 2015) can be useful in recommender systems beyond rating prediction. Although these approaches show that personalized summaries are more useful than generic ones, they do not attempt to iteratively refine a summary in an interactive user–system dialog.

3 Approach

The goal of our work is maximizing the user-desired content in a summary within a minimum number of iterations. To this end, we propose an interactive loop that alternates the automatic creation of a summary and the acquisition of user feedback to refine the next iteration's summary.

3.1 Summary Creation

Our starting point is the concept-based ILP summarization framework by Boudin et al. (2015). Let C be the set of concepts in a given set of source documents D, $c_i$ the presence of concept i in the resulting summary, $w_i$ a concept's weight, $\ell_j$ the length of sentence j, $s_j$ the presence of sentence j in the summary, and $Occ_{ij}$ the occurrence of concept i in sentence j. Based on these definitions, we formulate the following ILP:

$\max \sum_i w_i c_i$  (1)
$\sum_j \ell_j s_j \le L$  (2)
$\sum_j s_j\, Occ_{ij} \ge c_i \quad \forall i$  (3)
$s_j\, Occ_{ij} \le c_i \quad \forall i, j$  (4)
$c_i \in \{0, 1\} \quad \forall i$  (5)
$s_j \in \{0, 1\} \quad \forall j$  (6)

The objective function (1) maximizes the occurrence of concepts $c_i$ in the summary based on their weights $w_i$. Constraint (2) restricts the summary length to a maximum length L, (3) ensures that a concept is only selected if it is present in at least one of the selected sentences, and (4) ensures the selection of all concepts in a sentence $s_j$ if $s_j$ has been selected for the summary.

The two key factors for the performance of this ILP are defining the concept set C and a method to estimate the weights $w_i \in W$. Previous works have used word bigrams as concepts (Gillick and Favre, 2009; Li et al., 2013; Boudin et al., 2015) and either use document frequency (i.e., the number of source documents containing the concept) as weights (Woodsend and Lapata, 2012; Gillick and Favre, 2009) or estimate them using a supervised regression model (Li et al., 2013). For our implementation, we likewise use bigrams as concepts and document frequency as weights, as Boudin et al. (2015) report good results with this simple strategy. Our approach is, however, not limited to this setup, as our interactive approach allows for any definition of C and W, including potentially more sophisticated weight estimation methods, e.g., based on deep neural networks.
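The ILP in equations (1) to (6) can be written down almost verbatim with an off-the-shelf solver. The sketch below uses the PuLP library with placeholder data structures (concept weights, sentence lengths, an occurrence relation); it illustrates the formulation and is not the authors' released implementation.

```python
import pulp

def build_summary(weights, lengths, occurs, L):
    """Concept-based ILP (eqs. 1-6): maximize weighted concept coverage
    under a length budget.
    weights: {concept_id: w_i}, lengths: {sentence_id: l_j},
    occurs: set of (concept_id, sentence_id) pairs with Occ_ij = 1,
    L: maximum summary length. Returns the selected sentence ids."""
    concepts, sentences = list(weights), list(lengths)
    c = pulp.LpVariable.dicts("c", concepts, cat="Binary")    # eq. (5)
    s = pulp.LpVariable.dicts("s", sentences, cat="Binary")   # eq. (6)
    prob = pulp.LpProblem("summarization", pulp.LpMaximize)
    prob += pulp.lpSum([weights[i] * c[i] for i in concepts])          # (1)
    prob += pulp.lpSum([lengths[j] * s[j] for j in sentences]) <= L    # (2)
    for i in concepts:
        covering = [s[j] for j in sentences if (i, j) in occurs]
        prob += pulp.lpSum(covering) >= c[i]                           # (3)
    for (i, j) in occurs:
        prob += s[j] <= c[i]                                           # (4)
    prob.solve()
    return [j for j in sentences if s[j].value() > 0.5]
```

Constraint (4) is only added for pairs with Occ_ij = 1, since it is trivially satisfied otherwise.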
In section 5.2, we additionally analyze how other notions of concepts can be integrated into our approach.

3.2 Interactive Summarization Loop

Algorithm 1 provides an overview of our interactive summarization approach. The system takes the set of source documents D as input, derives the set of concepts C, and initializes their weights W. In line 5, we start the interactive feedback loop iterating over t = 0, ..., T. We first create a summary S^t (line 6) by solving the ILP and then extract a set of concepts Q^t (line 7), for which we query the user in line 11. As the user feedback in the current time step, we use the concepts I^t ⊆ Q^t that have been considered important by the user. For updating the weights W in line 12, we may use all feedback collected until the current time step t, i.e., $I_0^t = \bigcup_{j=0}^{t} I_j$, and the set of concepts $Q_0^t = \bigcup_{j=0}^{t} Q_j$ seen by the user (with $Q_0^{-1} = \emptyset$). If there are no more concepts to query (i.e., Q^t = ∅), we stop the iteration and return the personalized summary S^t.

Algorithm 1 Interactive summarizer
1: procedure INTERACTIVESUMMARIZER()
2:   input: Documents D
3:   C ← extractConcepts(D)
4:   W ← conceptWeights(C)
5:   for t = 0...T do
6:     S^t ← getSummary(C, W)
7:     Q^t ← extractConcepts(S^t) − Q_0^{t−1}
8:     if Q^t = ∅ then
9:       return S^t
10:    else
11:      I^t ← obtainFeedback(S^t, Q^t)
12:      W ← updateWeights(W, I_0^t, Q_0^t)
13:    end if
14:  end for
15: end procedure

3.3 User Feedback Optimization

To optimize the summary creation based on user feedback, we iteratively change the concept weights in the objective function of the ILP setup. We define the following models:

Accept model (ACCEPT). This model presents the current summary S^t with highlighted concepts Q^t to a user and asks him/her to select all important concepts I^t. We assign the maximum weight MAX to all concepts in I^t and consider the remaining Q^t − I^t as unimportant by setting their weight to 0 (see equations 7 and 8). The intuition behind this baseline is that the modified scores cause the ILP to prefer the user-desired concepts while avoiding unimportant ones.

$\forall i \in I_0^t:\; w_i = \text{MAX}$  (7)
$\forall i \in Q_0^t - I_0^t:\; w_i = 0$  (8)

Joint ILP with User Feedback (JOINT). The ACCEPT model fails in cases where the user cannot accept concepts that never appear in any of the summaries S^t. To tackle this, our JOINT model changes the objective function of the ILP in order to create S^t by jointly optimizing importance and user feedback. We thus replace equation (1) with:

$\max \begin{cases} \sum_{i \notin Q_0^t} w_i c_i - \sum_{i \in Q_0^t} w_i c_i & \text{if } t \le \tau \\ \sum_i w_i c_i & \text{if } t > \tau \end{cases}$  (9)

Equation (9) maximizes the use of concepts for which we yet lack feedback ($i \notin Q_0^t$) and minimizes the use of concepts for which we already have feedback ($i \in Q_0^t$). In this JOINT model, we use an exploration phase t = 0...τ to collect the feedback, which terminates when the user does not return any important concepts (i.e., I^t = ∅). In the exploratory phase, the minus term in equation (9) helps to reduce the score of sentences whose concepts have already received feedback; in other words, it causes higher scores for sentences consisting of concepts which yet lack feedback. After the exploration step, we fall back to the original importance-based objective function from equation (1).

Active learning with uncertainty sampling (AL). Our JOINT model explores well in terms of prioritizing the concepts which yet lack user feedback. However, it gives equal probabilities to all the unseen concepts.
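Algorithm 1 combined with the ACCEPT update (equations 7 and 8) can be sketched in a few lines of Python. This is an illustration, not the released system: get_summary stands for the ILP solver, obtain_feedback for the user interaction, extract_concepts is assumed to return a set of bigram concepts, and the concrete value used for MAX is our assumption.

```python
MAX_WEIGHT = 1e6  # stands in for "MAX" in eq. (7); the concrete value is an assumption

def interactive_summarizer(documents, extract_concepts, concept_weights,
                           get_summary, obtain_feedback, T=10):
    """Interactive loop of Algorithm 1 with the ACCEPT weight update."""
    concepts = set(extract_concepts(documents))
    weights = concept_weights(concepts)
    seen, important = set(), set()                       # Q_0^t and I_0^t
    summary = None
    for t in range(T + 1):
        summary = get_summary(concepts, weights)         # line 6: solve the ILP
        queries = set(extract_concepts(summary)) - seen  # line 7
        if not queries:
            return summary                               # lines 8-9
        feedback = set(obtain_feedback(summary, queries))  # line 11
        seen |= queries
        important |= feedback
        for i in important:                              # eq. (7)
            weights[i] = MAX_WEIGHT
        for i in seen - important:                       # eq. (8)
            weights[i] = 0.0
    return summary
```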
The AL model employs pool-based active learning (Kremer et al., 2014) during the exploration phase in order to prioritize concepts for which the model is most uncertain. We distinguish the unlabeled concept pool $C_u = \{\Phi(\tilde{x}_1), \Phi(\tilde{x}_2), ..., \Phi(\tilde{x}_N)\}$ and the labeled concept pool $C_\ell = \{(\Phi(x_1), y_1), (\Phi(x_2), y_2), ..., (\Phi(x_N), y_N)\}$, where each concept $x_i$ is represented as a d-dimensional feature vector $\Phi(x_i) \in \mathbb{R}^d$. The labels $y_i \in \{-1, 1\}$ are 1 for all important concepts in $I_0^t$ and −1 for all unimportant concepts in $Q_0^t - I_0^t$. Initially, the labeled concept pool $C_\ell$ is small or empty, whereas the unlabeled concept pool $C_u$ is relatively large. The learning algorithm is presented with the pool $C = C_\ell \cup C_u$ and is first called to learn a decision function $f^{(0)}: \mathbb{R}^d \to \mathbb{R}$, where $f^{(0)}(\Phi(\tilde{x}))$ is taken to predict the label of the input vector $\Phi(\tilde{x})$. Then, in each t-th iteration, where t = 1, 2, ..., τ, the querying algorithm selects an instance $\tilde{x}_t \in C_u$ for which the learning algorithm is least certain. Thus, the goal of active learning is to minimize the expected loss L (i.e., hinge loss) with limited querying opportunities, in order to obtain decision functions $f^{(1)}, f^{(2)}, ..., f^{(\tau)}$ that achieve low error rates:

$\min\; \mathbb{E}_{(\Phi(x), y) \in C_\ell} \left[ L\!\left(f^{(t)}(\Phi(x)), y\right) \right]$  (10)

As the learning algorithm, we use a support vector machine (SVM) with a linear kernel. To obtain the probability distribution over classes, we use Platt's calibration (Platt, 1999), an effective approach for transforming classification models into a probability distribution. Equation (11) shows the probability estimates for $f^{(t)}$, where $f^{(t)}$ is the uncalibrated output of the SVM in the t-th iteration and A, B are scalar parameters learned by the calibration algorithm. The uncertainty scores are calculated as described in equation (12) for all concepts which lack feedback ($C_u$).

$p(y \mid f^{(t)}) = \dfrac{1}{1 + \exp(A f^{(t)} + B)}$  (11)

$u_i = 1 - \max_{y \in \{-1, 1\}} p(y \mid f^{(t)})$  (12)

For our AL model, we now change the objective function in order to create S^t by multiplying the weights $w_i$ with the uncertainty scores $u_i$. We thus replace the objective function (9) with:

$\max \begin{cases} \sum_{i \notin Q_0^t} u_i w_i c_i & \text{if } t \le \tau \\ \sum_i w_i c_i & \text{if } t > \tau \end{cases}$  (13)

Active learning with positive sampling (AL+). One way to sample the unseen concepts is using uncertainty as in AL, but another way is to actively choose samples that the learning algorithm predicts to be important concepts. In AL+, we introduce the notion of certainty $(1 - u_i)$ for the positively predicted samples ($f^{(t)}(\Phi(\tilde{x}_i)) = 1$) in the objective function (1) for producing S^t:

$\max \begin{cases} \sum_{i \notin Q_0^t} (1 - u_i)\, \ell_i w_i c_i & \text{if } t \le \tau \\ \sum_i w_i c_i & \text{if } t > \tau \end{cases}$  (14)

where

$\ell_i = \begin{cases} 0 & \text{if } f^{(t)}(\Phi(\tilde{x}_i)) = -1 \\ 1 & \text{if } f^{(t)}(\Phi(\tilde{x}_i)) = 1 \end{cases}$  (15)

Dataset | Lang | Topics | Summary type      | Length
DBS     | de   | 10     | Coherent extracts | ≈500 words
DUC'01  | en   | 30     | Abstracts         | 100 words
DUC'02  | en   | 59     | Abstracts         | 100 words
DUC'04  | en   | 50     | Abstracts         | 100 words
Table 2: Statistics of the MDS datasets used

4 Experimental Setup

4.1 Data

For our experiments, we mainly focus on the DBS corpus, which is an MDS dataset of coherent extracts created from heterogeneous sources about multiple educational topics (Benikova et al., 2016). This corpus is well-suited for our evaluation setup, since we are able to easily simulate a user's feedback based on the overlap between the generated and the reference summary. Additionally, we carry out experiments on the most commonly used evaluation corpora published by DUC/NIST from the generic multi-document summarization task carried out in DUC'01, DUC'02 and DUC'04.
However, since we aim at personalizing 1358 Datasets ICSI UB ACCEPT JOINT AL AL+ R1 R2 SU4 R1 R2 SU4 R1 R2 SU4 R1 R2 SU4 R1 R2 SU4 R1 R2 SU4 Concept Notion: Bigrams DBS .451 .183 .190 .848 .750 .532 .778 .654 .453 .815 .707 .484 .833 .729 .498 .828 .721 .500 DUC’04 .374 .090 .118 .470 .212 .185 .442 .176 .165 .444 .180 .166 .440 .178 .160 .427 .166 .154 DUC’02 .350 .085 .110 .474 .216 .187 .439 .178 .161 .444 .182 .165 .448 .188 .165 .448 .184 .170 DUC’01 .333 .073 .105 .450 .213 .181 .414 .171 .156 .418 .167 .149 .435 .186 .163 .426 .181 .158 Concept Notion: Content Phrases DBS .403 .135 .154 .848 .750 .532 .691 .531 .430 .742 .597 .419 .776 .652 .448 .767 .629 .440 DUC’04 .374 .090 .118 .470 .212 .185 .441 .176 .160 .441 .179 .162 .444 .180 .162 .422 .164 .150 DUC’02 .350 .085 .110 .474 .216 .187 .436 .181 .162 .444 .183 .165 .446 .185 .168 .442 .182 .162 DUC’01 .333 .073 .105 .450 .213 .181 .410 .165 .153 .417 .170 .156 .433 .182 .161 .420 .179 .154 Table 3: ROUGE-1 (R1), ROUGE-2 (R2) and ROUGE SU-4 (SU4) achieved by our models after the tenth iteration of the interactive loop in comparison to the upper bound and the basic ILP setup Datasets ACCEPT JOINT AL AL+ #F #F #F #F Concept Notion: Bigrams DBS 313 296 348 342 DUC’04 15 14 16 14 DUC’02 14 13 15 15 DUC’01 13 11 13 13 Concept Notion: Content Phrases DBS 110 114 133 145 DUC’04 8 9 10 10 DUC’02 7 7 8 6 DUC’01 7 7 8 6 Table 4: Average amount of user feedback (#F) considered by our models at the end of the tenth iteration of the interactive summarization loop the summary for an individual user, we evaluate our models based on the mean ROUGE scores across clusters per reference summary. In Table 4, we additionally evaluate the models based on the amount of feedback (#F = |IT 0 |) taken by the oracles to converge to the upper bound within ten iterations. To examine the system performance based on user feedback, we analyze our models’ performance on multiple datasets. The results in Table 3 show that our idea of interactive multi-document summarization allows users to steer a general summary towards a personalized summary consistently across all datasets. From the results, we can see that the AL model starts from the conceptbased ILP summarization and nearly reaches the upper bound for all the datasets within ten iterations. AL+ performs similar to AL in terms of ROUGE, but requires less feedback (compare Table 4). Furthermore, the ACCEPT and JOINT models get stuck in a local optimum due to the less exploratory nature of the models. 5.2 Concept Notion Our interactive summarization approach is based on the scalable global concept-based model which uses bigrams as concepts. Thus, it is intuitive to use bigrams for collecting user feedback as well.7 Although our models reach the upper bound when using bigram-based feedback, they require a significantly large number of iterations and much feedback to converge, as shown in Table 4. To reduce the amount of feedback, we also consider content phrases to collect feedback. That is, syntactic chunks from the constituency parse trees consisting of non-function words (i.e., nouns, verbs, adjectives, and adverbs). For DBS being extractive dataset, we use bigrams and content phrases as concepts, both for the objective function in equation (1) and as feedback items, whereas for the DUC datasets, the concepts are always bigrams for both the feedback types (bigrams/content phrases). 
For DUC being abstractive, in the case of feedback given on content phrases, they are projected back to the bigrams to change the concept weights in order to have more overlap of simulated feedback. Table 4 shows feedbacks based on the content phrases reduces the number of feedbacks by a factor of 2. Furthermore, when content phrases are used as concepts for DBS, the performance of the models is lower compared to bigrams, as seen in Table 3. 5.3 Datasets Figure 2 compares the ROUGE-2 scores and the amount of feedback used over time when applied to the DBS and the DUC’04 corpus. We can see from the figure that all models show an improvement of +.45 ROUGE-2 after merely 4 iterations 7We prune bigrams consisting of only functional words. 1359 0 1 2 3 4 5 6 7 8 9 10 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 ROUGE 2 Upper bound Active+ Active Joint Accept 0 1 2 3 4 5 6 7 8 9 10 # Iterations 0 50 100 150 200 250 300 350 # Feedbacks 0 1 2 3 4 5 6 7 8 9 10 0.08 0.10 0.12 0.14 0.16 0.18 0.20 0.22 ROUGE 2 Upper bound Active+ Active Joint Accept 0 1 2 3 4 5 6 7 8 9 10 # Iterations 0 2 4 6 8 10 12 14 16 # Feedbacks Figure 2: Analysis for the models over the DBS (left) and DUC’04 (right) datasets on DBS. For DUC’04, the improvements are +.1 ROUGE-2 after ten iterations, which is relatively notable considering the lower upper bound of .21 ROUGE-2. This is primarily because DBS is a corpus of cohesive extracts, whereas DUC’04 consists of abstractive summaries. As a result, the oracles created using abstractive reference summaries have lower overlap of concepts as compared to that of the oracles created using extractive summaries. For DBS, it becomes clear that the JOINT model converges faster with an optimum amount of feedback as compared to other models. ACCEPT takes relatively more feedbacks than JOINT, but performs low in terms of ROUGE scores. The best performing models are AL and AL+, which reach closest to the upper bound. This is clearly due to the exploratory nature of the models which use semantic representations of the concepts to predict uncertainty and importance of possible concepts for user feedback. For DUC’04, the JOINT model reaches the closest to the upper bound, closely followed by AL. The JOINT model consistently stays above all other models and it gathers more important concepts due to optimizing feedbacks for concepts which lack feedback. Interestingly, AL+ performs rather worse in terms of both ROUGE scores and gathering important concepts. The primary reason for this is the fewer feedback collected from the simulation due to the abstractive property of reference summaries, which makes the AL+ model’s prediction inconsistent. 5.4 Personalization Figure 3 shows the performance of different models in comparison to two different oracles for the same document cluster. For DBS, the JOINT, AL, and AL+ models consistently converge to the upper bound in 4 iterations for different oracles, whereas ACCEPT takes longer for one oracle and does not reach the upper bound for the other. For DUC’04, JOINT and AL show consistent performance across the oracles, whereas AL+ performs worse than the state-of-the-art system (iteration 0) for oracle created using abstractive summaries as shown in Figure 3 (right) for User:1. 
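The uncertainty scores that drive AL and AL+ (Section 3.3) can be obtained with scikit-learn, whose linear SVM applies Platt scaling when probability estimates are requested. The sketch below assumes the concept feature vectors Phi have already been computed and that the labeled pool contains both classes; it is an illustration, not the released code.

```python
from sklearn.svm import SVC

def uncertainty_scores(X_labeled, y_labeled, X_unlabeled):
    """Train a Platt-calibrated linear SVM on the labeled concept pool and
    return u_i = 1 - max_y p(y | f) for every unlabeled concept (eq. 12)."""
    classifier = SVC(kernel="linear", probability=True)   # Platt scaling inside
    classifier.fit(X_labeled, y_labeled)                   # labels in {-1, +1}
    probabilities = classifier.predict_proba(X_unlabeled)  # shape (N, 2)
    return 1.0 - probabilities.max(axis=1)
```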
1360 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 ROUGE 2 User:1 0 1 2 3 4 5 6 7 # Iterations 0.2 0.3 0.4 0.5 0.6 0.7 0.8 ROUGE 2 User:2 Upper bound Active+ Active Joint Accept 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 ROUGE 2 User:1 0 1 2 3 4 5 6 7 # Iterations 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 ROUGE 2 User:2 Upper bound Active+ Active Joint Accept Figure 3: Analysis of models over cluster 7 from DBS (left) and cluster d30051t from DUC’04 (right) respectively for different oracles However, for User:2, we observe a ROUGE-2 improvement of +.1 indicating that the predictions of the active learning system are better if there is more feedback. Nevertheless, we expect that in practical use, the human summarizers may give more feedback similar to DBS in comparison to DUC’04 simulation setting. 6 Conclusion and Future Work We propose a novel ILP-based approach using interactive user feedback to create multi-document user-desired summaries. In this paper, we investigate pool-based active learning and joint optimization techniques to collect user feedback for identifying important concepts for a summary. Our models show that interactively collecting feedback consistently steers a general summary towards a user-desired personalized summary. We empirically checked the validity of our approach on standard datasets using simulated user feedback and observed that our framework shows promising results in terms of producing personalized multidocument summaries. As future work, we plan to investigate more sophisticated sampling strategies based on active learning and concept graphs to incorporate lexicalsemantic information for concept selection. We also plan to look into ways to propagate feedback to similar and related concepts with partial feedback, to reduce the total amount of feedback. This is a promising direction as we have shown that interactive methods help to create user-desired personalized summaries, and with minimum amount of feedbacks, it has propitious use in scenarios where user-adapted content is a requirement. Acknowledgments This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1. We also acknowledge the useful comments and suggestions of the anonymous reviewers. 1361 References Darina Benikova, Margot Mieskes, Christian M. Meyer, and Iryna Gurevych. 2016. Bridging the gap between extractive and abstractive summaries: Creation and evaluation of coherent extracts from heterogeneous sources. In Proceedings of the 26th International Conference on Computational Linguistics (COLING). Osaka, Japan, pages 1039–1050. http://aclweb.org/anthology/C16-1099. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL/HLT). Portland, OR, USA, pages 481–490. http://aclweb.org/anthology/P11-1049. Shlomo Berkovsky, Timothy Baldwin, and Ingrid Zukerman. 2008. Aspect-based personalized text summarization. In Adaptive Hypermedia and Adaptive Web-Based Systems. Proceedings of the 5th International Conference, Springer, Berlin/Heidelberg, volume 5149 of Lecture Notes in Computer Science, pages 267–270. https://doi.org/10.1007/978-3-54070987-9 31. Florian Boudin, Hugo Mougard, and Benoit Favre. 2015. 
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1364–1373 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1125 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1364–1373 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1125 Flexible and Creative Chinese Poetry Generation Using Neural Memory Jiyuan Zhang1,2, Yang Feng3,1,4, Dong Wang1∗, Yang Wang1, Andrew Abel6, Shiyue Zhang1,5, Andi Zhang1,5 1Center for Speech and Language Technologies(CSLT), RIIT, Tsinghua University, China 2Shool of Software & Microelectronics, Peking University, China 3Key Laboratory of Intelligent Information Processing,Institute of Computing Technology,CAS 4Huilan Limited, Beijing, China 5Beijing University of Posts and Telecommunications, China 6Xi’an Jiaotong-Liverpool University, China zhangjy [email protected], [email protected] Abstract It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism. A potential problem of this approach, however, is that neural models can only learn abstract rules, while poem generation is a highly creative process that involves not only rules but also innovations for which pure statistical models are not appropriate in principle. This work proposes a memory-augmented neural model for Chinese poem generation, where the neural model and the augmented memory work together to balance the requirements of linguistic accordance and aesthetic innovation, leading to innovative generations that are still rule-compliant. In addition, it is found that the memory mechanism provides interesting flexibility that can be used to generate poems with different styles. 1 Introduction Classical Chinese poetry is a special cultural heritage with over 2,000 years of history and is still fascinating us today. Among the various genres, perhaps the most popular one is the quatrain, a special style with a strict structure (four lines with five or seven characters per line), a regulated rhythmical form (the last characters in the second and fourth lines must follow the same rhythm), and a required tonal pattern (tones of characters in some positions should satisfy a predefined regulation) (Wang, 2002). This genre flourished mostly in the Tang Dynasty, and so are often called 1Corresponding author: Dong Wang; RM 1-303, FIT BLDG, Tsinghua University, Beijing (100084), P.R. China. ‘Tang poems’. An example of a quatrain written by Wei Wang, a famous poet in the Tang Dynasty, is shown in Table 1. Due to the stringent restrictions in both rhythm and tone, it is not trivial to create a fully rule-compliant quatrain. More importantly, besides such strict regulations, a good quatrain should also read fluently, hold a consistent theme, and express a unique affection. Therefore, poem generation is widely recognized as a very intelligent activity and can be performed only by knowledgeable people with a lot of training. Wi Climbing the Paradise Mound •¿Ø·§ (* Z Z P Z) As I was not in a good mood this evening round, °•"(P P P Z P) I went by cart to climb the Ancient Paradise Mound. IÕЧ (* P P Z Z) It is now nearing dusk, •´C‘³"(* Z Z P P) When the setting sun is infinitely fine, which is a must. Table 1: An example of a 5-char quatrain. 
The tonal pattern is shown at the end of each line, where ’P’ indicates a level tone, ’Z’ indicates a downward tone, and ’*’ indicates the tone can be either. The translation is from (Tang, 2005). In this paper we are interested in machine poetry generation. Several approaches have been studied by researchers. For example, rule-based methods (Zhou et al., 2010), statistical machine translation (SMT) models (Jiang and Zhou, 2008; He et al., 2012) and neural models (Zhang and Lapata, 2014; Wang et al., 2016a,c). Compared to previ1364 ous approaches (e.g., rule-based or SMT), the neural model approach tends to generate more fluent poems and some generations are so natural that even professional poets can not tell they are the work of machines (Wang et al., 2016a). In spite of these promising results, neural models suffer from a particular problem in poem generation, a lack of innovation. Due to the statistical nature of neural models, they pay much more attention to high-frequency patterns, whereas they ignore low-frequency ones. In other words, the more regular and common the patterns, the better the neural model is good at learning them and tends to use them more frequently at run-time. This property certainly helps to generate fluent sentences, but it is not always useful: the major value of poetry is not fluency, but the aesthetic innovation that can stimulate some unique feelings. This is particularly true for Chinese quatrains that are highly compact and expressive: it is nearly impossible to find two similar works in the thousands of years of history in this genre, demonstrating the importance of uniqueness or innovation. Ironically, the most important thing, innovation, is largely treated as trivial, if not noise, by present neural models. Actually this problem is shared by all generation models based on statistics (although it is more serious for neural models) and has aroused a long-standing criticism for machine poem generation: it can generate, and sometimes generate well, but the generation tends to be unsurprising and not particularly interesting. More seriously, this problem exists not only in poem generation, but also in all generation tasks that require innovation. This paper tries to solve this extremely challenging problem. We argue that the essential problem is that statistical models are good at learning general rules (usage of regular words and their combinations) but are less capable of remembering special instances that are difficult to cover with general rules. In other words, there is only rule-based reasoning, no instance-based memory. We therefore present a memory-augmented neural model which involves a neural memory so that special instances can be saved and referred to at run-time. This is like a human poet who creates poems by not only referring to common rules and patterns, but also recalls poems that he has read before. It is hard to say whether this combination of rules and instances produces true innovation (which often requires real-life motivation rather than simple word reordering), but it indeed offers interesting flexibility to generate new outputs that look creative and are still rule-compliant. Moreover, this flexibility can be used in other ways, e.g., generating poems with different styles. In this paper, we use the memory-augmented neural model to generate flexible and creative Chinese poems. 
We investigate three scenarios where adding a memory may contribute: the first scenario involves a well trained neural model where we aim to promote innovation by adding a memory, the second scenario involves an over-fitted neural model where we hope the memory can regularize the innovation, and in the third scenario, the memory is used to encourage generation of poems of different styles. 2 Related Work A multitude of methods have been proposed for automatic poem generation. The first approach is based on rules and/or templates. For example, phrase search (Tosa et al., 2009; Wu et al., 2009), word association norm (Netzer et al., 2009), template search (Oliveira, 2012), genetic search (Zhou et al., 2010), text summarization (Yan et al., 2013). Another approach involves various SMT methods, e.g., (Jiang and Zhou, 2008; He et al., 2012). A disadvantage shared by the above methods is that they are based on the surface forms of words or characters, having no deep understanding of the meaning of a poem. More recently, neural models have been the subject of much attention. A clear advantage of the neural-based methods is that they can ‘discover’ the meaning of words or characters, and can therefore more deeply understand the meaning of a poem. Here we only review studies on Chinese poetry generation that are mostly related to our research. The first study we have found in this direction is the work by Zhang and Lapata (2014), which proposed an RNN-based approach that produces each new line character-by-character using a recurrent neural network (RNN), with all the lines generated already (in the form of a vector) as a contextual input. This model can generate quatrains of reasonable quality. Wang et al. (2016b) proposed a much simpler neural model that treats a poem as an entire character sequence, and poem generation is conducted character-by-character. This approach can be 1365 easily extended to various genres such as Song Iambics. To avoid theme drift caused by this long-sequence generation, Wang et al. (2016b) utilized the neural attention mechanism (Bahdanau et al., 2014) by which human intention is encoded by an RNN to guide the generation. The same model was used by Wang et al. (2016a) for Chinese quatrain generation. Yan (2016) proposed a hierarchical RNN model that conducts iterative generation. Recently, Wang et al. (2016c) proposed a similar sequence generation model, but with the difference that attention is placed not only on the human input, but also on all the characters that have been generated so far. They also proposed a topic planning scheme to encourage a smooth and consistent theme. All the neural models mentioned above try to generate fluent and meaningful poems, but none of them consider innovation. The memory-augmented neural model proposed in this study intends to address this issue. Our system was built following the model structure and training strategy proposed by Wang et al. (2016a) due to its simplicity and demonstrated quality, but the memory mechanism is general and can be applied to any of the models presented above. The idea of memory argumentation was inspired by the recent advance in neural Turing machine (Graves et al., 2014, 2016) and memory network (Weston et al., 2014). These new models equip neural networks with an external memory that can be accessed and manipulated via some trainable operations. In comparison, the memory in our work plays a simple role of knowledge storage, and the only operation is simple pre-defined READ. 
In this sense, our model can be regarded as a simplified neural Turing machine that omits training.

3 Memory-augmented neural model

In this section, we first present the idea of memory augmentation, and then describe the model structure and the training method.

3.1 Memory augmentation

The idea of memory augmentation is illustrated in Fig. 1. It contains two components: the neural model component on the left and the memory component on the right. In this work, the attention-based RNN generation model presented by Wang et al. (2016a) is used as the neural model component, although any neural model is suitable.

Figure 1: The memory-augmented neural model used for Chinese poetry generation.

The memory component involves a set of 'direct' mappings from input to output, and can therefore be used to memorize special cases of the generation that cannot be represented by the neural model. For poem generation, the memory stores information about which character should be generated in a particular context. The outputs of the two components are then integrated, leading to a consolidated output.

There are several ways to understand the memory-augmented neural model. Firstly, it can be regarded as a way of combining reasoning (neural model) and knowledge (memory). Secondly, it can be regarded as a way of combining rule-based inference (neural model) and instance-based retrieval (memory). Thirdly, it can be regarded as a way of combining predictions from complementary systems, where the neural model is continuous and parameter-shared, while the memory is discrete and involves no parameter sharing. Finally, the memory can be regarded as an effective regularization that constrains and modifies the behavior of the neural model, resulting in generations with desired properties. Note that this memory-augmented neural model is inspired by and related to the memory networks proposed by Weston et al. (2014) and Graves et al. (2016), but we focus more on an accompanying memory that plays the role of assistance and regularization.

3.2 Model structure

Using the Chinese poetry generation model shown in Fig. 1 as an example, this section discusses the construction of a memory-augmented neural model. Firstly, the neural model part is an attention-based sequence-to-sequence model (Bahdanau et al., 2014). The encoder is a bi-directional RNN (with GRU units) that converts the input topic words, denoted by the embeddings of their constituent characters (x1, x2, ..., xN), into a sequence of hidden states (h1, h2, ..., hN). The decoder then generates the whole quatrain character by character, denoted by the corresponding embeddings (y1, y2, ...). At each step t, the prediction of the state st is based on the last generated character yt−1, the previous decoder state st−1, and all the hidden states (h1, h2, ...) of the encoder. Each hidden state hi contributes to the generation according to a relevance factor α_{t,i} that measures the similarity between st−1 and hi. This is written as

s_t = f_d(y_{t-1}, s_{t-1}, \sum_{i=1}^{N} α_{t,i} h_i),

where α_{t,i} represents the contribution of hi to the present generation and can be implemented as any function. The output of the model is a posterior probability over the whole set of characters, written as

z_t = σ(s_t W),

where W is the projection matrix.

The memory consists of a set of elements {m_i}_{i=1}^{K}, where K is the size of the memory. Each element m_i involves two parts: the source part m_i(s), which encodes the context in which this element should be selected, and the target part m_i(g), which encodes what should be output if this element is selected. In our study, the neural model is trained first, and then the memory is created by running f_d (the decoder of the neural model). Specifically, for the k-th poem selected for the memory, the character sequence is fed to the decoder one character at a time, with the contribution from the encoder set to zero. Denoting the starting position of this poem in the memory by p_k, the decoder state at the j-th step is used as the source part of the (p_k + j)-th element of the memory, and the embedding of the corresponding character, x_j, is set to be the target part. This is formally written as

m_i(s) = f_d(x_{j-1}, s_{j-1}, 0)    (1)

and m_i(g) = x_j, where i = p_k + j.

At run-time, the memory elements are selected according to their fit to the present decoder state s_t, and the outputs of the selected elements are averaged as the output of the memory component. We use the cosine similarity to measure the degree of fit, and have

v_t = \sum_{i=1}^{K} cos(s_t, m_i(s)) m_i(g).    (2)

(In fact, we run a parallel decoder to provide s_t in Eq. (2); this decoder does not accept input from the encoder and is therefore consistent with the memory construction process of Eq. (1).)

The output of the neural model and that of the memory can be combined in various ways. Here, a simple linear combination before the softmax is used, i.e.,

z_t = σ(s_t W + β v_t E),    (3)

where β is a pre-defined weighting factor and E contains the word embeddings of all the characters. Although it is possible to train β from the data, we found that a learned β is not better than a manually selected one. This is probably because β trades off the contributions of the model and the memory, and how to make this trade-off should be prior knowledge rather than a tunable parameter. In fact, if it were trained, it would immediately adapt to match the training data, which would nullify our effort to encourage innovative generation.
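To make the decoding step and the memory read-out in Eqs. (1)-(3) concrete, here is a minimal NumPy sketch. It is not the authors' implementation: the decoder cell f_d is reduced to a single tanh layer rather than a GRU, the relevance function behind α_{t,i} is assumed to be a simple bilinear attention with an extra matrix Wa, the decoder matrix Wd is likewise an assumption, and only W (the output projection) and E (the character embedding table) correspond to symbols used above.

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def decode_step(y_prev, s_prev, H, params, memory, beta=16.0):
    """One memory-augmented decoding step.
    y_prev : embedding of the previously generated character
    s_prev : previous decoder state s_{t-1}
    H      : encoder hidden states, shape (N, hidden_dim)
    memory : list of (m_i(s), m_i(g)) pairs, i.e. (decoder state, character embedding)
    beta   : pre-defined weighting factor (16 in the paper's C1 setting)
    """
    Wa, Wd, W, E = params["Wa"], params["Wd"], params["W"], params["E"]
    # Relevance of each encoder state to s_{t-1} (a stand-in for alpha_{t,i}).
    alpha = softmax(H @ (Wa @ s_prev))
    context = alpha @ H                              # sum_i alpha_{t,i} h_i
    # Simplified decoder cell f_d (a GRU in the paper).
    s_t = np.tanh(Wd @ np.concatenate([y_prev, s_prev, context]))
    # Memory read-out, Eq. (2): cosine-weighted sum of target embeddings.
    v_t = np.zeros(E.shape[1])
    for m_s, m_g in memory:
        v_t += cosine(s_t, m_s) * m_g
    # Linear combination before the softmax, Eq. (3); E has one row per character.
    z_t = softmax(s_t @ W + beta * (E @ v_t))
    return s_t, z_t                                  # z_t is a distribution over characters

In the full model, the state used for the memory read-out comes from a parallel decoder that receives no encoder input, as noted after Eq. (2); the sketch ignores that distinction for brevity.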
3.3 Model Training

In our implementation, only the neural model component needs to be trained. The training algorithm follows the scheme defined in (Wang et al., 2016a), where the cross entropy between the distribution over Chinese characters given by the decoder and the ground truth is used as the objective function. The optimization uses the SGD algorithm together with AdaDelta to adjust the learning rate (Zeiler, 2012).

4 Memory augmentation for Chinese poetry generation

This section describes how the memory mechanism can be used to trade off the requirements of rule-compliant generation and aesthetic innovation, and how it can also be used for more interesting purposes, for example style transfer.

4.1 Memory for innovative generation

In this section, we describe how the memory mechanism promotes innovation. Monitoring the training process of the attention-based model, we found that the cost on the training set keeps decreasing until it approaches zero, whereas on the validation set the improvement stops after only one iteration. This can be explained by the fact that Chinese quatrains are highly unique, so the common patterns are fully learned in one iteration, and additional iterations result in overfitting. Because of the overfitting, we observe that with the one-iteration model reasonable poems can be generated, whereas with the over-fitted model the generated poems are meaningless, in that they do not resemble feasible character sequences.
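The two configurations compared in the experiments, the one-iteration model C1 and the overfitted model C∞, follow directly from this observation. The sketch below is only illustrative and assumes a hypothetical trainer interface (fit_one_epoch, validation_loss, save); the actual objective and optimizer are the cross entropy and SGD with AdaDelta described in Section 3.3.

def train_c1_and_cinf(model, train_data, valid_data, max_epochs=30):
    """Checkpoint the one-iteration model (C1) and keep training to obtain
    the over-fitted model (C-infinity); `model` is a hypothetical interface."""
    history = []
    for epoch in range(1, max_epochs + 1):
        train_loss = model.fit_one_epoch(train_data)   # cross entropy, SGD + AdaDelta
        valid_loss = model.validation_loss(valid_data)
        history.append((epoch, train_loss, valid_loss))
        if epoch == 1:
            model.save("poem_model_C1")    # validation cost stops improving after this pass
    model.save("poem_model_Cinf")          # training cost close to zero: over-fitted
    return history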
The energy model perspective helps to explain this difference. For the one-iteration model, the energy surface is smooth and the energy of the training data is not very low, as illustrated in plot (a) in Fig. 2, where the x-axis represents the input and y-axis represents the output, and the z-axis represents the energy. With this model, inputs with small variance will be attracted to the same low-energy area, leading to similar generations. These generations are trivial, but at least reasonable. If the model is overfitted, however, the energy at the locations of the training data becomes much lower than their surrounding areas, leading to a bumpy energy surface as shown in plot (b) in Fig. 2. With this model, inputs with a small variation may be attracted to very different low-energy areas, leading to significantly different generations. Since many of the low-energy areas are nothing to do with good generations but are simply caused by the complex energy function, the generations can be highly surprising for human readers, and the quality is not guaranteed. In some sense, these generations can be regarded as ‘innovative’ , but based on observations made in our experiments, most of them are meaningless. The augmented memory introduces a new energy function, which is combined with the energy function of the neural model to change the energy surface of the generation system. This can be seen in Eq. (3), where stW and βvtE can be regarded as the energy function of the neural model component and the memory component, respectively, and the energy function of the memory-augmented system is the sum of the energy functions of these two components. For this reason, the effect of the memory mechanism can be regarded as a regularization of the neural model that will adjust its generation behavior. This regularization effect is illustrated in Fig. 2, where the energy function of the memory shown in plot (c) is added to the energy function of the oneiteration model and the overfitted model, as shown in plot (e) and plot (f) respectively. It can be seen that with the memory involved, the energy surface becomes more bumpy with the one-iteration model, and more smooth with the overfitted model. In the former case, the effect of the memory is to encourage innovation, while still focusing on rule-compliance, and in the latter case, the effect is to encourage rule compliance, while keeping the capability for innovation. It is important to notice that the energy function of the memory component is a linear combination of the energy functions of the compositional elements (see Eq.(2)), each of which is convex and is minimized at the location represented by the element. This means that the energy surface of the memory is rather ‘healthy’, in the sense that low-energy locations mostly correspond to good generations. For this reason, the regularization provided by the memory is safe and helpful. 4.2 Memory for style transfer The effect of the memory is easy to control. For example, the complexity of the behavior can be controlled by the memory size, the featured bias can be controlled by memory selection, and the strength of the impact can be controlled by the weighting parameter β. This means that the memory mechanism is very flexible and can be used to produce poems with desired properties. In this work, we use these capabilities to generate poems with different styles. This has been illustrated in Fig. 
2, where the energy function of the style memory shown in plot (d) is biased towards a particular style; once it is added to the energy function of the one-iteration model, the resulting energy function shown in plot (g) obtains lower values at locations corresponding to those of the memory, which encourages the generation of poems with styles similar to the poems in the memory.

Figure 2: The energy surface for (a) the one-iteration model, (b) the overfitted model, (c) the memory, (d) the style memory, (e) the one-iteration model augmented with memory, (f) the overfitted model augmented with memory, and (g) the one-iteration model augmented with the style memory.

5 Experiments

This section describes the experiments and results carried out in this paper. The baseline system was a reproduction of the attention-based system presented in (Wang et al., 2016a). This system has been shown to be rather flexible and powerful: it can generate different genres of Chinese poems, and when generating quatrains it has been shown to be able to fool human experts in many cases (Wang et al., 2016a); the authors also did a thorough comparison with the competitive methods mentioned in the related work of this paper. We obtained the database and the source code (in Theano), and reproduced their system using TensorFlow from Google (https://www.tensorflow.org/). We did not make comparisons with some previous methods such as NNLM, SMT, and RNNPG, as they had been fully compared in (Wang et al., 2016a) and all of them were much worse than the attention-based system. Another reason was that the experts were not happy to evaluate poems of clearly bad quality. We also reproduced the model of (Wang et al., 2016c) with the help of its first author. However, since their implementation did not involve any restrictions on rhythm and tone, the experts were reluctant to recognize its outputs as good poems. With a larger dataset (e.g., 1 million poems), rhythm and tone could presumably be learned, and their system would be good in terms of both fluency and rule compliance. It should also be emphasized that the memory approach proposed in this paper is a general technique and is complementary to other efforts such as the planning approach (Wang et al., 2016c) and the recursive approach (Yan, 2016).

Based on the baseline system, we built the memory-augmented model and conducted two experiments to demonstrate its power. The first is an innovation experiment, which employs memory to promote or regularize the generation of innovative poems; the second is a style-transfer experiment, which employs memory to generate flexible poems in different styles. We invited 34 experts to participate in the experiments, all of whom have rich experience not only in evaluating poems but also in writing them. Most of the experts are from prestigious institutes, including Peking University and the Chinese Academy of Social Sciences (CASS). Following the suggestions of the experts, we use five metrics to evaluate the generation, as listed below:

• Compliance: whether the regulations on tones and rhymes are satisfied;
• Fluency: whether the sentences read fluently and convey reasonable meaning;
• Theme consistency: whether the entire poem adheres to a single theme;
• Aesthetic innovation: whether the quatrain stimulates an aesthetic feeling through elaborate innovation;
• Scenario consistency: whether the scenario remains consistent.

5.1 Datasets

The baseline system was built with two customized datasets.
The first dataset is a Chinese po1369 em corpus (CPC), which we used in this work to train the embeddings of Chinese characters. Our CPC dataset contains 284,899 traditional Chinese poems in various genres, including Tang quatrains, Song Iambics, Yuan Songs, and Ming and Qing poems. This large quantity of data ensures reliable learning for the semantic content of most Chinese characters. Our second dataset is a Chinese quatrain corpus (CQC) that we have collected from the internet, which consists of 13, 299 5-char quatrains and 65, 560 7-char quatrains. This corpus was used to train the attention-based RNN baseline. We filtered out the poems whose characters are all low-frequency (less than 100 counts in the database). After the filtering, the remaining corpus contains 9,195 5-char quatrains and 49,162 7-char quatrains. We used 9,000 5-char and 49,000 7-char quatrains to train the attention model, and the rest for validation. Another two datasets were created for use in the memory-augmented system. Our first dataset, MEM-I, contains 500 quatrains randomly selected from our CQC corpus. This dataset was used to produce the memory in the innovation experiment; the second dataset, MEM-S, contains 300 quatrains with clear styles, including 100 pastoral, 100 battlefield and 100 romantic quatrains. It was used to generate memory with different styles in the style-transfer experiment. All the datasets will be released online3. 5.2 Evaluation Process We invited 34 experts to evaluate the quality of the poem generation. In the innovation experiment, the evaluation consisted of a comparison between different systems and configurations in terms of the five metrics. The innovation questions presented the expert with two poems, and asked them to judge which of the poems was better in terms of the five metrics; in the style-transfer experiment, the evaluation was performed by identifying the style of a generated poem. The evaluation was conducted online, with each questionnaire containing 11 questions focusing on innovation and 4 questions concerned with style-transfer. Each of the style-transfer questions presented the expert with a single poem and asked them to score it between 1 to 5, with a larger score being better, in terms of compliance, aesthetic innovation, 3http://vivi.cslt.org scenario consistency, and fluency. They were also asked to specify the style of the poem. Using the poems generated by our systems, we generated many different questions of both types, and then created a number of online questionnaires that randomly selected from these questions. This meant that as discussed above, each questionnaire had 11 randomly selected innovation questions, and 4 randomly selected style transfer questions. Each question was only used once, meaning that it was not duplicated on multiple questionnaires, and so each questionnaire was different. Experts could choose to answer multiple questionnaires if they wished, as each one was different. From the 34 experts, we collected 69 completed questionnaires, which equals to 759 innovation questions and 276 style-transfer questions. 5.3 Innovation experiment This experiment focuses on the contribution of memory for innovative poem generation. We experimented with two configurations: one is with a one-iteration model (C1) and the other is with an overfitted model (C∞). The memory was generated from the 500 quatrains in MEM-I, and the weighting factor was defined empirically as 16 for C1 and 49 for C∞. 
The topics of the generation were 160 keywords randomly selected from Shixuhanyinge (Liu, 1735). Given a pair of poems generated by two different configurations using the same topic, the experts were asked to choose which one they preferred. The evaluation is therefore pair-wised, and each pair of configurations contains at least 180 evaluations. The results are shown in Table 2, where the preference ratio for each pair of configurations was tested in terms of the 5 metrics. From the first row of Table 2, we observe that the experts have a clear preference for the poems generated by the C1 model, the one that can produce fluent yet uninteresting poems. In particular, the ‘aesthetic innovation’ score for C∞is not better than C1, which was different from what we expected. Informal offline discussions with the poetry experts found that the experts identified some innovative expression in the C∞condition, but most of the them was regarded as being nonsense in the opinion of many of the experts. In comparison to sparking innovation, fluency and being meaningful is more important not only for non-expert readers, but also for professional poet1370 Preference Ratio Compliance Fluency Theme Aesthetic Scenario Consistency Innovation Consistency C1 vs C∞ 0.59:0.41 0.68:0.32 0.70:0.30 0.68:0.32 0.69:0.31 C1 vs C1+Mem 0.41:0.59 0.36:0.64 0.37:0.63 0.33:0.67 0.43:0.57 C∞vs C∞+Mem 0.40:0.60 0.26:0.74 0.32:0.68 0.30:0.70 0.36:0.64 C1 vs C∞+Mem 0.43:0.57 0.58:0.42 0.59:0.41 0.50:0.50 0.59:0.41 Table 2: Preference ratios for systems with or without overfitting and with or without memory augmentation. s. In other words, only meaningful innovation is regarded as innovation, and irrational innovation is simply treated as junk. From the second and third rows of Table 2, it can be seen that involving memory significantly improves both C1 and C∞, particularly for C∞. For C1, the most substantial improvement is observed in terms of ‘Aesthetic innovation’, which is consistent with our argument that memory can help encourage innovation for this model. For C∞, ‘Fluency’ seems to be the most improved metric. This is also consistent with our argument that involving memory constrains over-innovation for over-fitted models. The last row of Table 2 is an extra experiment that investigates if C∞is regularized well enough after introducing the memory. It seems that with the regularization, the overfitting problem is largely solved, and the generation is nearly as fluent and consistent as the C1 condition. Interestingly, the score for aesthetic innovation is also significantly improved. Since the regularization is not supposed to boost innovation, this seems confusing at first glance (in comparison to the result on the same metric in the first row), but this is probably because the increased fluency and consistency makes the innovation more appreciated, therefore doubly confirming our argument that true innovation should be reasonable and meaningful. 5.4 Style-transfer experiment In the second experiment, the memory mechanism is used to generate poems in different styles. We chose three styles: pastoral, battlefield, and romantic. A style-specific memory, which we call style memory, was constructed for each style by the corresponding quatrains in the MEM-S dataset. The system with one-iteration model C1 was used as the baseline. Two sets of topics were used in the experiment, one is general and the other is style-biased. 
The experiments then investigate if the memory mechanism can produce a clear style if the topic is general, and can transfer to a different style if the topic is style-biased already. The experts were asked to specify the style from four options including the three defined above and a ‘unclear style’ option. In addition, the experts were asked to score the poems in terms of compliance, fluency, aesthetic innovation, and scenario consistency, which we can use to check if the style transfer impacts the quality of the poem generation. Note that we did not ask for the theme consistency to be scored in this experiment because the topic words were not presented to the experts, in order to prevent the topic affecting their judgment regarding the style. The score ranges from 1 to 5, with a larger score being better. Table 3 presents the results with the general topics. The numbers show the probabilities that the poems generated by a particular system were labeled as having various styles. Since the topics are unbiased in types, the generation of the baseline system is assumed to be with unclear styles. For other systems, the style of the generation is assumed to be the same as the style of their memories. The results in Table 3 clearly demonstrates these assumptions. The tendency that romantic poems are recognized as pastoral poems is a little surprising. Further analysis shows that experts tend to recognize romantic poems as pastoral poems only if there are any related symbols such as trees, mountain, river. These words are very general in Chinese quatrains. The indicator words of romantic poems such as skirt, rouge, and singing are not as popular and their indication power is not as strong, leading to less labeling of romantic poems, as shown in the results. Probability Model Pastoral Battlefield Romantic Unclear C1 (Baseline) 0.09 0.04 0.18 0.69 C1 + Pastoral Mem 0.94 0.00 0.06 0.00 C1 + Battlefield Mem 0.05 0.93 0.00 0.02 C1 + Romantic Mem 0.17 0.00 0.61 0.22 Table 3: Probability that poems generated by each configuration with general topics are labeled as various styles. We also tested transferring from one style to 1371 another. This was achieved by generating poems with some style-biased topics, and then using a style memory to force the generation to change the style. Our experiments show that in 73% cases the style can be successfully transferred. Finally, the scores of the poems generated with and without the style memories are shown in Table 4, where the poems generated with both general and style-biased topics are accounted for. It can be seen that overall, the style transfer may degrade fluency a little. This is understandable, as enforcing a particular style has to break the optimal generation with the baseline, which is assumed to be good at generating fluent poems. Nevertheless the sacrifice is not significant. Method Compliance Fluency Aesthetic Scenario Innovation Consistence C1 (baseline) 4.10 3.01 2.53 2.94 C1 + Pastoral Mem 4.07 3.00 3.07 3.17 C1 + Battlefield Mem 3.82 2.63 2.60 2.95 C1 + Romantic Mem 4.00 2.78 2.59 3.00 C1 + All Mem 3.95 2.80 2.74 3.05 Table 4: Averaged scores for systems with or without style memory. 5.5 Examples Table 5 to Table 7 shows example poems generated by the system C1, C1+Mem and C1+Style Mem where the style in this case is set to be romantic. The three poems were generated with the same, very general, topic (‘g(oneself)’). More examples are given in the supporting material. gld¿Ã%Ô§ Nothing in my heart, ˜FÀºØŒî" Spring wind is not a pity. 
#<mÛ¤3§ Don’t ask where it is, ·8®k½ƒD" I’ve noticed that and tell others. Table 5: Example poems generated by the C1 system. 6 Conclusions In this paper, we proposed a memory mechanism to support innovative Chinese poem generation by neural models augmented with a memory. Experimental results demonstrated that memory can boost innovation from two opposite di˜ìgkÃ<Ч Nobody speaking in the mountain, Ø´“\Y>" Also no green cloud stepping into the river. #rSºNá“§ Spring wind does not stir leaves, smÉä÷ôE" But flowers blooming in trees and flying to boats. Table 6: Example poems generated by the C1+Mem system. s†®òD•/§ Beautiful face addressed by rouge, ðK ÉTD" Mandarin duck outside the curtain. }EùfSÚe§ Green sleeves and red flowers in cold spring, 7ñ“Vë•" Willow leaves gone in fragrant mist. Table 7: Example poems generated by the C1+Style Mem system where the style is romantic. rections: either by encouraging creative generation for regularly-trained models, or by encouraging rule-compliance for overfitted models. Both strategies work well, although the former generated poetry that was preferred by experts in our experiments. Furthermore, we found that the memory can be used to modify the style of the generated poems in a flexible way. The experts we collaborated with feel that the present generation is comparable to today’s experienced amateur poets. Future work involves investigating a better memory selection scheme. Other regularization methods (e.g., norm or drop out) are also interesting and may alleviate the over-fitting problem. Acknowledgments This paper was supported by the National Natural Science Foundation of China (NSFC) under the project NO.61371136, NO.61633013, NO.61472428. 1372 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401 . Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature 538(7626):471–476. Jing He, Ming Zhou, and Long Jiang. 2012. Generating Chinese classical poems with statistical machine translation models. In Twenty-Sixth AAAI Conference on Artificial Intelligence. Long Jiang and Ming Zhou. 2008. Generating Chinese couplets using a statistical mt approach. In Proceedings of the 22nd International Conference on Computational Linguistics. Association for Computational Linguistics, volume 1, pages 377–384. Wenwei Liu. 1735. ShiXueHanYing. Yael Netzer, David Gabay, Yoav Goldberg, and Michael Elhadad. 2009. Gaiku: Generating haiku with word associations norms. In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity. Association for Computational Linguistics, pages 32–39. H Oliveira. 2012. Poetryme: a versatile platform for poetry generation. In Proceedings of the ECAI 2012 Workshop on Computational Creativity, Concept Invention, and General Intelligence. Yihe Tang. 2005. English Translation for Tang Poems (Ying Yi Tang Shi San Bai Shou). Tianjin People Publisher. Naoko Tosa, Hideto Obara, and Michihiko Minoh. 2009. Hitch haiku: An interactive supporting system for composing haiku poem. Entertainment Computing-ICEC 2008 pages 209–216. Springer. Li Wang. 2002. 
A Summary of Rhyming Constraints of Chinese Poems (Shi Ci Ge Lv Gai Yao), volume 1. Beijin Press. Qixin Wang, Tianyi Luo, and Dong Wang. 2016a. Can machine generate traditional Chinese poetry? a feigenbaum test. In BICS 2016. Qixin Wang, Tianyi Luo, Dong Wang, and Chao Xing. 2016b. Chinese song iambics generation with neural attention-based model. In IJCAI 16. Zhe Wang, Wei He, Hua Wu, Haiyang Wu, Wei Li, Haifeng Wang, and Enhong Chen. 2016c. Chinese poetry generation with planning based neural network. In COLING 2016. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916 . Xiaofeng Wu, Naoko Tosa, and Ryohei Nakatsu. 2009. New hitch haiku: An interactive renku poem composition supporting tool applied for sightseeing navigation system. Entertainment Computing-ICEC 2009 pages 191–196. Springer. Rui Yan. 2016. i, Poet: Automatic poetry composition through recurrent neural networks with iterative polishing schema. In IJCAI2016. Rui Yan, Han Jiang, Mirella Lapata, Shou-De Lin, Xueqiang Lv, and Xiaoming Li. 2013. i, Poet: automatic Chinese poetry composition through a generative summarization framework under constrained optimization. In Proceedings of the Twenty-Third international joint conference on Artificial Intelligence. AAAI Press, pages 2197– 2203. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 670–680. Cheng-Le Zhou, Wei You, and Xiaojun Ding. 2010. Genetic algorithm and its implementation of automatic generation of Chinese Songci. Journal of Software 21(3):427–437. 1373
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1374–1384 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1126 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1374–1384 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1126 Learning to Generate Market Comments from Stock Prices Soichiro Murakami†,∗Akihiko Watanabe †,∗Akira Miyazawa ‡,¶,∗Keiichi Goshima †,∗ Toshihiko Yanase § Hiroya Takamura †,∗ Yusuke Miyao ‡,¶,∗ † Tokyo Institute of Technology ‡ The Graduate University for Advanced Studies § Hitachi, Ltd. ¶ National Institute of Informatics ∗National Institute of Advanced Industrial Science and Technology {murakami,watanabe}@lr.pi.titech.ac.jp, [email protected], [email protected] [email protected], {miyazawa-a,yusuke}@nii.ac.jp Abstract This paper presents a novel encoderdecoder model for automatically generating market comments from stock prices. The model first encodes both short- and long-term series of stock prices so that it can mention short- and long-term changes in stock prices. In the decoding phase, our model can also generate a numerical value by selecting an appropriate arithmetic operation such as subtraction or rounding, and applying it to the input stock prices. Empirical experiments show that our best model generates market comments at the fluency and the informativeness approaching human-generated reference texts. 1 Introduction Various industries such as finance, pharmaceuticals, and telecommunications have been increasingly providing opportunities to treat various types of large-scale numerical time-series data. Such data are hard for non-specialists to interpret in detail and time-consuming even for specialists to construe. As a result, there has been a growing interest in automatically generating concise descriptions of such data, i.e., data summarization. This interest in data summarization is encouraged by the recent development of neural network-based text generation methods. Given an appropriate architecture, a neural network can generate a sentence that is mostly grammatical and semantically reasonable. In this study, we focus on the task of generating market comments from a time-series of stock prices. We adopt an encoder-decoder model (Sutskever et al., 2014) and exploit its capability to learn to capture the behavior of the input and generate a description of it. Although encoderdecoder models can learn to do this, they need to be (1) (2) (3) (4) (5) (6) Previous Day (Afternoon Session) Morning Session Afternoon Session 19200 19300 19400 19500 19600 14:00 15:00 9:00 10:00 11:00 12:00 13:00 14:00 15:00 Time Stock price [yen] Time Comment (1) 09:00 Nikkei opens with a continual fall. (2) 09:29 Nikkei turns to rise. (3) 11:30 Nikkei continues to fall. The closing price of the morning session decreases by 5 yen to 19,386 yen. (4) 12:30 Nikkei rises at the beginning of the afternoon session. (5) 13:54 Nikkei gains more than 100 yen. (6) 15:00 Nikkei rebounds and closes up 102 yen to 19,494 yen. Figure 1: Nikkei 225 and market comments. provided with an appropriate network-architecture and necessary information. We use Figure 1 to illustrate the characteristic problems of comment generation for time-series of stock prices. 
The figure shows the Nikkei Stock Average (Nikkei 225, or simply Nikkei), which is a stock market index calculated from 225 selected issues, on some consecutive trading days accompanied by the market comments made at some specific time points in the span. The first problem is that market comments do not merely describe the increase and decrease of the price. They also often describe how the price changes compared with the previous period, such as “continues to fall” in (3) of Figure 1, “turns to rise” in (2), and “rebound” in (6). Market comments sometimes describe the change in price compared with the prices in the previous week. The second problem is that market comments also 1374 contain expressions that depend on their delivery time: e.g., “opens with” in (1), “closing price of the morning session” in (3), and “beginning of the afternoon session” in (4). The third problem is that market comments typically contain numerical values, which often cannot be copied from the input prices. Such numerical values probably cannot be generated as other words are generated by the standard decoder. This difficulty can be easily understood as analogous with the difficulty of generating named entities by encoder-decoder models. To derive such values, the model needs arithmetic operations such as subtraction as in examples (3) and (6) mentioning the difference in price and rounding as in example (5). To address these problems, we present a novel encoder-decoder model to automatically generate market comments from stock prices. To address the first problem of capturing various types of change in different time scales, the model first encodes data consisting of both short- and long-term time-series, where a multi-layer perceptron, a recurrent neural network, or a convolutional network is adopted as a basic encoder. In the decoding phase, we feed our model with the delivery time of the market comment to generate the expressions depending on time of day to address the second problem. To address the third problem regarding with numerical values mentioned in the generated text, we allow our model to choose an arithmetic operation such as subtraction or rounding instead of generating a word. The proposed methods are evaluated on the task of generating Japanese market comments on the Nikkei Stock Average. Automatic evaluation with BLEU score (Papineni et al., 2002) and F-score of time-dependent expressions reveals that our model outperforms a baseline encoder-decoder model significantly. Furthermore, human assessment and error analysis prove that our best model generates characteristic expressions discussed above almost perfectly, approaching the fluency and the informativeness of human-generated market comments. 2 Related Work The task of generating descriptions from timeseries or structured data has been tackled in various domains such as weather forecasts (Belz, 2007; Angeli et al., 2010), healthcare (Portet et al., 2009; Banaee et al., 2013b), and sports (Liang et al., 2009). Traditionally, many studies used handcrafted rules (Goldberg et al., 1994; Dale et al., 2003; Reiter et al., 2005). On the other hand, interest has recently been growing in automatically learning a correspondence relationship from data to text and generating a description of this relationship since large-scale data in diversified formats have become easy to acquire. 
In fact, a data-driven approach has been extensively studied nowadays for various tasks such as image caption generation (Vinyals et al., 2015) and weather forecast generation (Mei et al., 2016b). The task, called data-to-text or concept-to-text, is generally divided into two subtasks: content selection and surface realization. Whereas previous studies tackled the subtasks separately (Barzilay and Lapata, 2005; Wong and Mooney, 2007; Lu et al., 2009), recent work has focused on solving them jointly using a single framework (Chen and Mooney, 2008; Kim and Mooney, 2010; Angeli et al., 2010; Konstas and Lapata, 2012, 2013). More recently, there has been some work on an encoder-decoder model (Sutskever et al., 2014) for generating a description from time-series or structured data to solve the subtasks jointly in a single framework, and this model has been proven to be useful (Mei et al., 2016b; Lebret et al., 2016). However, the task of generating a description from numerical time-series data presents difficulties such as the second and third problems mentioned in Section 1. For the second problem, the model needs to be fed with information on delivery time. Also, the model needs arithmetic operations such as subtraction for the third problem because even if we simply apply a copy mechanism (Gu et al., 2016; Gulcehre et al., 2016) to the model, it cannot derive a calculated value such as (3), (5), or (6) in Figure 1 from input. Thus, in this work, we tackle these problems and develop a model on the basis of the encoder-decoder model that can mention a specific numerical value by referring to the input data or producing a processed value with mathematical calculation and mention time-dependent expressions by incorporating the information on delivery time into its decoder. There has also been some work on generating market comments. Kukich (1983) developed a system consisting of rule-based components for generating stock reports from a database of daily stock quotes. Although she used several components individually and had to define a number of rules for the generation, our encoder-decoder model can 1375 perform it with fewer and simpler rules for the calculation. Aoki and Kobayashi (2016) developed a method on the basis of a weighted bi-gram language model for automatically describing trends of time-series data such as the Nikkei Stock Average. However, they did not attempt to refer to specific numerical values such as closing prices and amounts of rises in price although such descriptions are often used in market comments as shown in Figure 1 (3), (5), and (6). In contrast, we present a novel approach to generate natural language descriptions of time-series data that can not only able to describe trends of the data but also mention specific numerical values by referring to the time-series data. 3 Generating Market Comments To generate market comments on stock prices, we introduce an encoder-decoder model. Encoderdecoder models have been widely used and proven useful in various tasks of natural language generation such as machine translation (Cho et al., 2014) and text summarization (Rush et al., 2015). Our task is similar to these tasks in that the system takes sequential data and generates text. Therefore, it is natural to use an encoder-decoder model in modeling stock prices. Figure 2 illustrates our model. 
In describing time-series data, the model is expected to capture various types of change and important values in the given sequence, such as absolute or relative changes and maximum or minimum values, on different time scales. Moreover, it is necessary to generate time-dependent comments and numerical values that require arithmetic operations for derivation, such as “The closing price of the morning session decreases by 5 yen...”. To achieve these, we present three strategies that alter the standard encoder-decoder model. First (Section 3.1), we use several encoding methods for time-series data, as in (1) of Figure 2, to capture the changes and important values. Second (Section 3.2), we incorporate delivery-time information into the decoder, as in (2) of Figure 2, to generate time-dependent comments. For the decoder, we use a recurrent neural network language model (RNNLM) (Mikolov et al., 2010), which is widely used in language generation tasks. Finally (Section 3.3), we extend the decoder to estimate arithmetic operations, as in (3) of Figure 2, to generate numerical values in market comments.

[Figure 2: Overview of our model, with panels (1) Encoding Numerical Time-Series Data, (2) Incorporating Time Embedding, and (3) Estimation of Arithmetic Operations. Here l_short and l_long represent the two vectors of preprocessed values, h_short and h_long indicate the hidden states of the encoders, and T represents a time embedding vector.]

3.1 Encoding Numerical Time-Series Data

We prepare short- and long-term data, using the five-minute chart of Nikkei 225. A vector for short-term data consists of the prices of one trading day and has N elements. We denote it as x_short = (x_{short,i})_{i=0}^{N-1}. On the other hand, a vector for long-term data consists of the closing prices of the M preceding trading days. It is denoted as x_long = (x_{long,i})_{i=0}^{M-1}. Data are commonly preprocessed to remove noise and enhance the generalizability of a model (Zhang and Qi, 2005; Banaee et al., 2013a). We use two preprocessing methods: standardization and moving reference. Standardization substitutes each element x_i of input x by

x_i^{std} = \frac{x_i - \mu}{\sigma},   (1)

where \mu and \sigma are the mean and standard deviation of the values in the training data, respectively. Standardized values are less affected by scale. The second method, moving reference (Freitas et al., 2009), substitutes each element x_i of input x by

x_i^{move} = x_i - r_i,   (2)

where r_i is the closing price of the previous trading day of x. This is introduced to capture price fluctuations from the previous day. By applying one of the preprocessing methods to x_short and x_long, we obtain two vectors of preprocessed values l_short and l_long. Given these, each encoder emits the corresponding hidden states h_short and h_long. After obtaining the hidden states, we concatenate the two vectors of the preprocessed values and the outputs of the encoders as a multi-level representation of the input time-series data. The multi-level representation is an approach developed by Mei et al. (2016a) that enables the decoder to take into account both the high-level representation, e.g., h_short and h_long, and the low-level representation, e.g., l_short and l_long, at the same time.
They have shown that it improves performance in terms of selecting salient objects in input data. We thus set the initial hidden state s_0 of the decoder as

s_0 = l_{short} \oplus l_{long} \oplus h_{short} \oplus h_{long},   (3)

where \oplus is the concatenation operator. When we use both preprocessing methods, we have four preprocessed input vectors: l^{move}_{short}, l^{std}_{short}, l^{move}_{long}, and l^{std}_{long}. In this case, we introduce four encoders, and set the initial hidden state s_0 of the decoder as

s_0 = l^{move}_{short} \oplus l^{std}_{short} \oplus l^{move}_{long} \oplus l^{std}_{long} \oplus h^{move}_{short} \oplus h^{std}_{short} \oplus h^{move}_{long} \oplus h^{std}_{long}.   (4)

Since several encoding methods can be used for the time-series data, we use any one of three conventional neural networks: a Multi-Layer Perceptron (MLP), a Convolutional Neural Network (CNN), or a Recurrent Neural Network (RNN) with Long Short-Term Memory cells (Hochreiter and Schmidhuber, 1997). In the experiments, we empirically evaluate and compare the encoding methods.

3.2 Incorporating Time Embedding

Even if identical sequences of values are observed, comments usually vary in accordance with price history or the time they are observed. For instance, when the market opens, comments usually mention how much the stock price has increased or decreased compared with the closing price of the previous trading day, as in (1) and (3) in Figure 1. Our model creates vectors called time embedding vectors T on the basis of the time when the comment is delivered (e.g., 9:00 a.m. or 3:00 p.m.). Then a time embedding vector is added to each hidden state s_j in decoding so that words are generated depending on time. This mechanism is inspired by the speaker embedding introduced by Li et al. (2016). They use an encoder-decoder model for a conversational agent that inherits the characteristics of a speaker, such as his/her manner of speaking. They encode speaker-specific information (e.g., dialect, age, and gender) into speaker embedding vectors and use them in decoding.

3.3 Estimation of Arithmetic Operations

Text generation systems based on language models such as RNNLM often generate erroneous words for named entities; that is, they often mention a similar but incorrect entity, e.g., Nissan for Toyota. To overcome this problem, Gulcehre et al. (2016) developed a text generation method called the copy mechanism. The method copies rare words missing from the vocabulary from a given sequence of words using an attention mechanism and emits the copied words. Market comments often mention numerical values that appear in the input data, but they also mention values obtained through arithmetic operations, such as differences in prices as in (3) and (6) in Figure 1, or rounded values as in (5). Thus, another problem arises: what type of operation is suitable for the text to be generated? In this work, we solve this problem by extending the idea of the copy mechanism. To enable our model to generate text with values calculated from input values, we add generalization tags to the vocabulary used in the model. Each generalization tag represents a type of arithmetic operation. When a generalization tag is emitted, the model performs the operation on the designated values in accordance with the tag, replaces the tag with the calculated value, and finally outputs text containing numerical values. For preprocessing, we replace each numerical value appearing in the market comments in the training data with generalization tags such as <price1>. The tag for a numerical value depends on what the value stands for in the text.
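Before turning to the tag inventory in Table 1 below, the preprocessing of Eqs. (1)-(2) and the concatenation of Eq. (4) can be made concrete with a small sketch. This is only an illustration: the toy price values, the assumed training-set mean and standard deviation, and the random projection standing in for a learned MLP/CNN/RNN encoder are our own assumptions, not part of the original system.

```python
import numpy as np

def standardize(x, mu, sigma):
    # Eq. (1): remove scale effects using statistics from the training data.
    return (x - mu) / sigma

def moving_reference(x, prev_close):
    # Eq. (2): express prices relative to the previous day's closing price.
    return x - prev_close

# Toy short-term (intraday prices) and long-term (past closing prices) series.
x_short = np.array([12167.29, 12278.83, 12451.66, 12461.36])
x_long = np.array([12116.57, 12120.94, 12145.70, 12150.49])
mu, sigma = 12000.0, 150.0      # assumed training-set statistics (illustrative)
prev_close = x_long[-1]

l = {"short_std":  standardize(x_short, mu, sigma),
     "short_move": moving_reference(x_short, prev_close),
     "long_std":   standardize(x_long, mu, sigma),
     "long_move":  moving_reference(x_long, prev_close)}

def encode(vec, dim=8, seed=0):
    # Stand-in for a learned encoder (MLP/CNN/RNN); returns a hidden vector h.
    rng = np.random.default_rng(seed)
    return np.tanh(vec @ rng.normal(size=(len(vec), dim)))

# Eq. (4): concatenate the preprocessed vectors and the encoder outputs
# to form the decoder's initial hidden state s0 (multi-level representation).
s0 = np.concatenate([l["short_move"], l["short_std"],
                     l["long_move"], l["long_std"]]
                    + [encode(v) for v in l.values()])
print(s0.shape)   # 16 preprocessed values + 4 * 8 encoder outputs = (48,)
```

In the actual model the encoders are trained jointly with the decoder; the sketch only shows how the pieces are wired together.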
Table 1 displays all the tags and the corresponding types of calculation.

Tag         Arithmetic operation
<price1>    Return ∆
<price2>    Round down ∆ to the nearest 10
<price3>    Round down ∆ to the nearest 100
<price4>    Round up ∆ to the nearest 10
<price5>    Round up ∆ to the nearest 100
<price6>    Return z as it is
<price7>    Round down z to the nearest 100
<price8>    Round down z to the nearest 1,000
<price9>    Round down z to the nearest 10,000
<price10>   Round up z to the nearest 100
<price11>   Round up z to the nearest 1,000
<price12>   Round up z to the nearest 10,000

Table 1: Generalization tags and corresponding arithmetic operations. Here z and ∆ stand for the latest price and the difference between z and the closing price of the previous trading day.

To illustrate, suppose a market comment says

(a) Nikkei rebounds. The closing price of the morning session is 16,610 yen, which is 227 yen higher.

Since this comment omits the phrase “than the closing price of the previous day”, 227 in this example indicates the difference between the closing price of the previous trading day x_{long,M-1} and the latest price x_{short,N-1}, which is denoted by z in Table 1. Therefore, we replace 227 with the tag <price1>. Likewise, we replace 16,610 with <price6> because it represents the latest price z. To find the optimal tag for each value, we try all the types of operations listed in Table 1 using the values appearing in the text, i.e., 227 and 16,610 in this case. Then, we select the tag whose operation yields the value closest to the original one.

In prediction, the model first generates a tentative comment, which includes tags as well as words. Suppose that the input vectors are x_short and x_long, with x_{short,N-1} = 14508 and x_{long,M-1} = 14612, and that the model generates the comment below:

(b) Nikkei opens turning down. The loss exceeds <price2> yen, and it falls to the <price7> yen level.

Since the tag <price2> represents “the difference between x_{short,N-1} and x_{long,M-1} rounded down to the nearest 10”, we replace the tag with 100. Similarly, we replace <price7>, which is “the latest price x_{short,N-1} rounded down to the nearest 100”, with 14,500. Finally, we have a market comment containing the numbers as below:

(c) Nikkei opens turning down. The loss exceeds 100 yen, and it falls to the 14,500 yen level.

4 Experiments

4.1 Experimental Settings

We used the five-minute chart of Nikkei 225 from March 2013 to October 2016 as numerical time-series data, which were collected from IBI-Square Stocks1, and 7,351 descriptions as market comments, which are written in Japanese and provided by Nikkei QUICK News. We divided the dataset into three parts: 5,880 for training, 730 for validation, and 741 for testing. For the human evaluation, we randomly selected 100 comments and their time-series data included in the test set. We set N = 62, which is the number of time steps for stock prices of one trading day, and M = 7, which is the number of time steps for closing prices of the preceding trading days. We used Adam (Kingma and Ba, 2015) for optimization with a learning rate of 0.001 and a mini-batch size of 100. The dimensions of word embeddings, time embeddings, and hidden states for both the encoder and decoder are set to 128, 64, and 256, respectively. For the CNN, we used a single convolutional layer and set the filter size to 3. In the experiments, we conducted three types of evaluation: two for automatic evaluation, and one for human evaluation.
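As a concrete illustration of the tag-replacement procedure of Section 3.3, the sketch below implements the operations of Table 1 and reproduces example (b) -> (c). The sign handling of ∆ and the number formatting are simplifying assumptions on our part rather than details taken from the text.

```python
import math
import re

def operations(z, delta):
    # Arithmetic operations behind the generalization tags in Table 1.
    # z: latest price; delta: difference from the previous day's close.
    return {
        "<price1>": delta,
        "<price2>": math.floor(delta / 10) * 10,
        "<price3>": math.floor(delta / 100) * 100,
        "<price4>": math.ceil(delta / 10) * 10,
        "<price5>": math.ceil(delta / 100) * 100,
        "<price6>": z,
        "<price7>": math.floor(z / 100) * 100,
        "<price8>": math.floor(z / 1000) * 1000,
        "<price9>": math.floor(z / 10000) * 10000,
        "<price10>": math.ceil(z / 100) * 100,
        "<price11>": math.ceil(z / 1000) * 1000,
        "<price12>": math.ceil(z / 10000) * 10000,
    }

def closest_tag(value, z, delta):
    # Training-time preprocessing: replace a number in a reference comment
    # with the tag whose operation yields the closest value.
    table = operations(z, delta)
    return min(table, key=lambda tag: abs(table[tag] - value))

def fill_tags(comment, z, delta):
    # Prediction time: replace emitted tags with the calculated numbers.
    table = operations(z, delta)
    return re.sub(r"<price\d+>",
                  lambda m: format(int(table[m.group(0)]), ","), comment)

# Example (b) -> (c): latest price 14,508, previous close 14,612.
z, delta = 14508, abs(14508 - 14612)
print(closest_tag(227, 16610, 227))      # '<price1>', as in example (a)
print(fill_tags("Nikkei opens turning down. The loss exceeds <price2> yen, "
                "and it falls to the <price7> yen level.", z, delta))
# -> "... exceeds 100 yen, and it falls to the 14,500 yen level."
```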
For one automatic evaluation, we used BLEU (Papineni et al., 2002) to measure the matching degree between the market comments written by humans as references and output comments generated by our model. We applied paired bootstrap resampling (Koehn, 2004) for a significance test. For the other automatic evaluation metric, we calculate F-measures for time-dependent expressions, using market comments written by humans as references, to investigate whether our model can correctly output timedependent expressions such as “open with” and describe how the price changes compared with the previous period referring to the series of preceding prices such as “continual fall”. Specifically, we calculate F-measures for 13 expressions shown in Figure 3. For the human evaluation, we recruited a specialist in financial engineering as a judge to evaluate the quality of generated market comments. To evaluate the difference in the quality of generated comments between our models and human, we showed both system-generated and humangenerated market comments together with their 1http://www.ibi-square.jp/index.htm 1378 0.00 0.25 0.50 0.75 1.00 continual rise (zoku-shin) continual fall (zoku-raku) rebound (han-patsu) turn down (han-raku) X yen higher (X en daka no) X yen lower (X en yasu no) turn to rise (age ni tenjiru) turn to fall (sage ni tenjiru) gain (age-haba) loss (sage-haba) open (hajimaru) closing price of the morning session (zen-bike) closing price (oo-bike) F-measure Model baseline mlp-enc cnn-enc rnn-enc -short -long -std -move -multi -num -time Figure 3: F-measure values for the expressions on the test set. Each expression is accompanied by its original Japanese expression transliterated into English alphabet in parenthesis. Out of the 13 expressions, 10 on the left are expressions that describe how the price changes compared with the previous period, and 3 on the right are time-dependent expressions. Model baseline mlp-enc cnn-enc rnn-enc -short -long -std -move -multi -num -time Encoder MLP MLP CNN RNN MLP MLP MLP MLP MLP MLP MLP Input data xshort ✓ ✓ ✓ ✓ − ✓ ✓ ✓ ✓ ✓ ✓ xlong − ✓ ✓ ✓ ✓ − ✓ ✓ ✓ ✓ ✓ Preprocessing Standardization ✓ ✓ ✓ ✓ ✓ ✓ − ✓ ✓ ✓ ✓ Moving reference ✓ ✓ ✓ ✓ ✓ ✓ ✓ − ✓ ✓ ✓ Multi-level − ✓ ✓ ✓ ✓ ✓ ✓ ✓ − ✓ ✓ Arithmetic operation − ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ − ✓ Time-embedding − ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ − Table 2: Overview of the models we used in the experiments. time-series data consisting of xshort and xlong, without letting the judge know which comment is generated by which method. We asked the judge to give each market comment two scores: one for informativeness and one for fluency. Both scores have two levels, 0 or 1, where 1 indicates high informativeness or fluency. For informativeness, the judge used both generated comments and their input stock prices to rate the comments. Specifically, if the judge deem that a generated comment describes an important price movement or an outline of the movement properly, such comments are considered to be informative. For fluency, the judge read only the generated comments and rate them in terms of readability, regardless of their content of the comment. In addition, since some of the market comments written by humans sometimes include external information such as “Nikkei opens with a continual fall as yen pressures exporters”, we also asked the judge to ignore the correctness of external information mentioned in comments, for the sake of fairness in comparison, because external information cannot be retrieved from the time-series data. 
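As a side note on the automatic metrics above, the per-expression F-measure is not fully specified in the text; one natural reading, which treats each of the 13 expressions as present or absent in a reference-output pair, can be sketched as follows. The example strings are toy transliterations, and the actual evaluation would operate on the Japanese surface forms.

```python
def expression_f1(references, outputs, expression):
    # Count a true positive when the expression occurs in both the reference
    # and the generated comment for the same input.
    tp = fp = fn = 0
    for ref, out in zip(references, outputs):
        in_ref, in_out = expression in ref, expression in out
        tp += in_ref and in_out
        fp += (not in_ref) and in_out
        fn += in_ref and not in_out
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Toy transliterated comments (cf. the 13 expressions in Figure 3).
refs = ["nikkei heikin, han-patsu oo-bike wa 16,022 en",
        "nikkei heikin, zoku-raku de hajimaru"]
outs = ["nikkei heikin, han-patsu oo-bike wa 16,022 en",
        "nikkei heikin, han-patsu de hajimaru"]
for expr in ["han-patsu", "zoku-raku", "hajimaru"]:
    print(expr, expression_f1(refs, outs, expr))
```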
To assess the effectiveness of the techniques we introduced, we conducted experiments with 11 models. Table 2 shows an overview of the models Model baseline mlp-enc cnn-enc rnn-enc -short -long BLEU 0.243 0.464 0.449 0.454 0.380 0.433 Model -std -move -multi -num -time BLEU 0.455 0.393 0.435 0.318 0.395 Table 3: BLEU scores on the test set. Differences between the best model, mlp-enc, and other models are statistically significant at p < 0.05. we compared. We compared three types of models: a baseline, full models (e.g., mlp-enc), and ablated models (e.g., -short). For example, -short is a model that does not use the short-term time series. 4.2 Results Table 3 shows the BLEU scores on the test set. Figure 3 presents the F-measure of the models for each phrase. We also present output examples with human-generated market comments (Human) for reference in Figure 4. In the results for the automatic evaluation in BLEU, the model using both MLP as encoders and all the techniques we developed, mlp-enc, outperformed baseline and the other models. The BLEU scores and F-measure values revealed differences among the models using MLP, CNN, or RNN (mlp1379 15000 15500 16000 16500 17000 0 20 40 60 Time step [i] Stock price [yen] Short-term data 15000 15500 16000 16500 17000 0 2 4 6 Time step [i] Long-term data (a) Price movements of Nikkei on Feb. 15, 2016. 15000 15500 16000 16500 17000 0 20 40 60 Time step [i] Stock price [yen] Short-term data 15000 15500 16000 16500 17000 0 2 4 6 Time step [i] Long-term data (b) Price movements of Nikkei on July 21, 2016. Model F I Generated comment baseline 1 0 Nikkei heikin, han-patsu zen-bike wa 81 en daka no <unk> en Nikkei average, rebound-pop first_half-closing top 81 yen higher gen <unk> yen Nikkei rebounds. The closing price of the morning session is <unk> yen, which is 81 yen higher. mlp-enc 1 1 Nikkei heikin, oo-haba han-patsu oo-bike wa 1,069 en daka no 16,022 en Nikkei average, big-range rebound-pop big-closing top 1,069 yen higher gen 16,022 yen Nikkei significantly rebounds. The closing price is 16,022 yen, which is 1,069 yen higher. human 1 1 Nikkei heikin, oo-haba han-patsu oo-bike wa 1,069 en daka no 16,022 en Nikkei average, big-range rebound-pop big-closing top 1,069 yen higher gen 16,022 yen Nikkei significantly rebounds. The closing price is 16,022 yen, which is 1,069 yen higher. (c) Comments on price at 3:00 p.m. on February 15, 2016. Model F I Generated comment baseline 1 0 Nikkei heikin, zoku-shin de hajimaru age-haba 100 en koeru Nikkei average, continual-advance instr open-imperf raise-range 100 yen exceed-imperf Nikkei opens with a continual rise. The gain exceeds 100 yen. mlp-enc 1 1 Nikkei heikin, age-haba 200 en koeru Nikkei average, raise-range 200 yen exceed-imperf Nikkei gains more than 200 yen. human 1 1 Nikkei heikin, age-haba 200 en kosu Nikkei average, raise-range 200 yen exceed-imperf Nikkei gains more than 200 yen. (d) Comments on price at 9:00 a.m. on July 21, 2016. Figure 4: Examples of short- and long-term movements of Nikkei, and comments models made on them, where <unk> represents an unknown word. Columns F and I show scores on fluency and informativeness in human evaluation. Each example is accompanied by original Japanese comment transliterated into English alphabet, its literal translation, and the corresponding English sentence. Abbreviations used here are as follows. top: topic case, gen: genitive case, instr: instrumental case, and imperf: imperfect form of a verb. enc, cnn-enc, rnn-enc). 
In the comparison between the models that took two types of the time-series data xshort, xlong as input (e.g., mlp-enc or rnn-enc) and the models that only used one of them (-short, -long), the models using both types of data such as mlp-enc and rnn-enc gained higher BLEU scores than -short and -long. Also, the models that encoded the two types of time-series data to capture their short- and long-term changes correctly output more expressions that described the changes such as “turn to rise”, “continue to fall”, and “rebound” than -short and -long as shown in Figure 3. According to the comparison between preprocessing methods, mlp-enc, which used both standardization and moving reference as preprocessing methods, obtained a higher BLEU score than the models that used neither (-std, -move). In terms of the F-measure values, mlp-enc output phrases mentioning changes more appropriately and therefore achieved the higher values than the other two models as in “turn to rise” or “turn to fall” in Figure 3. Furthermore, we found that the BLEU score of -multi, which did not use the multi-level representation of the data, was inferior. In other words, incorporating the multi-level representation along with an output of an encoder into a decoder seems 1380 0.1 0.2 0.3 0.4 0.5 0 2000 4000 6000 Size of training data BLEU Model baseline mlp-enc cnn-enc rnn-enc -short -long -std -move -multi -num -time Figure 5: BLEU scores of market comments generated by models for each size of training data on the validation set. to contribute to improving the automatic evaluation and producing a better representation of the input data. baseline and -num output numerical values as “words” from the vocabulary for RNNLM because these models do not use any arithmetic operation. Therefore, there were many cases including <unk> that should be output as a numerical value as shown in Figure 4 (a). We found that -num had a lower BLEU score than the models such as mlp-enc and -std that used arithmetic operations. Furthermore, we observed that the models with arithmetic operations correctly generated stock prices in most cases. By comparing -time, which did not incorporate time-embeddings into a decoder, and other models such as mlp-enc with respect to the F-measure of expressions depending on delivery time (e.g., “open with” or “closing session”), we found that the models that took time information into account, such as mlp-enc, generated those phrases more accurately than -time. Moreover, we analyzed the effect of different sizes of training data. Figure 5 shows BLEU scores of market comments generated by our models for each size of training data on the validation set. According to the results, we found that the BLEU scores for the models saturated when we used 3000 training data. In addition, there was not much difference in convergence speed among the models. The human evaluation results in Table 4 indicate that market comments generated by our model (mlp-enc) achieved a quality comparable even to that of market comments written by humans. Moreover, we found that mlp-enc signifiModel Informativeness Fluency External Human 95 95 25 mlp-enc 85 93 1 baseline 28 100 6 Table 4: Results of human evaluation. Each score indicates number of market comments judged to be level-1. External shows number of market comments including external information. cantly outperformed baseline in terms of informativeness but was outperformed by baseline in terms of fluency. 
The reason was that mlp-enc occasionally generated a market comment such as “Nikkei gains more than 0 yen” because of an error in the prediction of the operation, and such comments were not considered not to be fluent or informative by the judge, although most of comments generated by mlp-enc were as fluent as those of baseline. Note that baseline does not generate expressions like “0 yen” because they are not normally used in market comments and so not included in the vocabulary. Therefore, the judge considered all the comments generated by baseline to be fluent. For another possibility to enhance our model, we have to consider that the model should mention a difference or gain for a duration from when to when. For example, our current model sometimes generated a market comment such as “Nikkei gains more than 200 yen”, although Nikkei actually gained more than 300 yen. Such a comment is not incorrect but is imprecise. Therefore, we consider that a mechanism is needed to select the period to be mentioned when the model generates a comment to this problem and increase the generalizability of our model for generating a description from various time-series data. 5 Conclusion and Future Work In this study, we presented a novel encoder-decoder model to automatically generate market comments from numerical time-series data of stock prices, using the Nikkei Stock Average as an example. Descriptions of numerical time-series data written by humans such as market comments have several writing style characteristics. For example, (1) content to be mentioned in the market comments varies depending on short- or long-term changes of the time-series data, (2) expressions depending on delivery time at which text is written are used, and (3) numerical values obtained through arith1381 metic operations applied to the input data are often described. We developed approaches for generating comments that have these characteristics and showed the effectiveness of the proposed model. In future work, we plan to apply our model to descriptions of time-series data in various domains such as weather forecasts and sports, which share the above writing-style characteristics. We also plan to use multiple time-series as input such as multiple brands of stock. Acknowledgements This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). References Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 502–512. http://aclweb.org/anthology/ D10-1049. Kasumi Aoki and Ichiro Kobayashi. 2016. Linguistic summarization using a weighted n-gram language model based on the similarity of time-series data. In Proceedings of IEEE International Conference on Fuzzy Systems. pages 595–601. https://doi.org/10. 1109/FUZZ-IEEE.2016.7737741. Hadi Banaee, Mobyen Uddin Ahmed, and Amy Loutfi. 2013a. A framework for automatic text generation of trends in physiological time series data. In Processing of IEEE International Conference on Systems, Man, and Cybernetics. pages 3876–3881. https: //doi.org/10.1109/SMC.2013.661. Hadi Banaee, Mobyen Uddin Ahmed, and Amy Loutfi. 2013b. Towards NLG for physiological data monitoring with body area networks. In Proceedings of the 14th European Workshop on Natural Language Generation. 
Association for Computational Linguistics, pages 193–197. http://aclweb.org/anthology/ W13-2127. Regina Barzilay and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing. pages 331–338. http: //aclweb.org/anthology/H05-1042. Anja Belz. 2007. Probabilistic generation of weather forecast texts. In Proceedings of the 2007 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 164–171. http://aclweb. org/anthology/N07-1021. David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In Proceedings of the 25th international conference on Machine learning. pages 128–135. https: //doi.org/10.1145/1390156.1390173. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1724–1734. https://doi.org/ 10.3115/v1/D14-1179. Robert Dale, Sabine Geldof, and Jean-Philippe Prost. 2003. CORAL: Using natural language generation for navigational assistance. In Proceedings of the 26th Australasian Computer Science Conference. pages 35–44. http://dl.acm.org/citation.cfm? id=783106.783111. Fabio D. Freitas, Alberto F. De Souza, and Ailson R. de Almeida. 2009. Prediction-based portfolio optimization model using neural networks. Neurocomputing 72(10):2155–2170. https://doi.org/10.1016/j. neucom.2008.08.019. Eli Goldberg, Norbert Driedger, and Richard I. Kittredge. 1994. Using natural-language processing to produce weather forecasts. IEEE Expert 9(2):45–53. https://doi.org/10.1109/64.294135. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1631–1640. https://doi. org/10.18653/v1/P16-1154. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 140–149. https://doi.org/10.18653/v1/ P16-1014. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. https://doi.org/10.1162/neco. 1997.9.8.1735. Joohyun Kim and Raymond J. Mooney. 2010. Generative alignment and semantic parsing for learning from ambiguous supervision. In Proceedings of the 23rd International Conference on Computational Linguistics. pages 543–551. http://aclweb. org/anthology/C10-2062. 1382 Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations. https://arxiv.org/abs/1412.6980. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 388–395. http://aclweb. org/anthology/W04-3250. Ioannis Konstas and Mirella Lapata. 2012. 
Unsupervised concept-to-text generation with hypergraphs. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 752–761. http://aclweb.org/anthology/N121093. Ioannis Konstas and Mirella Lapata. 2013. Inducing document plans for concept-to-text generation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1503–1514. http://aclweb.org/anthology/D13-1157. Karen Kukich. 1983. Design of a knowledge-based report generator. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 145–150. http://aclweb.org/anthology/P831022. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1203–1213. https://doi.org/ 10.18653/v1/D16-1128. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 994–1003. https: //doi.org/10.18653/v1/P16-1094. Percy Liang, Michael Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of Association for Computational Linguistics and International Joint Conference on Natural Language Processing. Association for Computational Linguistics, pages 91–99. http: //aclweb.org/anthology/P09-1011. Wei Lu, Hwee Tou Ng, and Wee Sun Lee. 2009. Natural language generation with tree conditional random fields. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 400–409. http://aclweb.org/anthology/D091042. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016a. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Proceedings of Association for the Advancement of Artificial Intelligence. https://arxiv.org/abs/1506. 04089. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016b. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 720–730. https://doi.org/10.18653/v1/N161086. Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association. International Speech Communication Association, 9, pages 1045–1048. http://www.isca-speech.org/ archive/interspeech_2010/i10_1045.html. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 311–318. http://aclweb. org/anthology/P02-1040. François Portet, Ehud Reiter, Albert Gatt, Jim Hunter, Somayajulu Sripada, Yvonne Freer, and Cindy Sykes. 2009. 
Automatic generation of textual summaries from neonatal intensive care data. Artificial Intelligence 173(7-8):789–816. https://doi.org/10. 1016/j.artint.2008.12.002. Ehud Reiter, Somayajulu Sripada, Jim Hunter, Jin Yu, and Ian Davy. 2005. Choosing words in computergenerated weather forecasts. Artificial Intelligence 167(1-2):137–169. https://doi.org/10.1016/j.artint. 2005.06.006. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 379–389. https://doi.org/ 10.18653/v1/D15-1044. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems. pages 3104–3112. https://papers.nips.cc/paper/5346-sequence-tosequence-learning-with-neural-networks. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural 1383 image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 3156–3164. https://arxiv.org/ abs/1411.4555. Yuk Wah Wong and Raymond Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In Proceedings of the 2007 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 172–179. http://aclweb. org/anthology/N07-1022. G. Peter Zhang and Min Qi. 2005. Neural network forecasting for seasonal and trend time series. European journal of operational research 160(2):501–514. https://doi.org/10.1016/j.ejor.2003.08.037. 1384
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1385–1393 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1127

Can Syntax Help? Improving an LSTM-based Sentence Compression Model for New Domains

Liangguo Wang†∗, Jing Jiang∗, Hai Leong Chieu⋆, Chen Hui Ong⋆, Dandan Song†, Lejian Liao†
[email protected], [email protected], {chaileon, ochenhui}@dso.org.sg, {sdd, liaolj}@bit.edu.cn
† School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
∗School of Information Systems, Singapore Management University, Singapore
⋆DSO National Laboratories, Singapore

Abstract

In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.

1 Introduction

Sentence compression is the task of compressing long, verbose sentences into short, concise ones. It can be used as a component of a text summarization system. Figure 1 shows two example input sentences and the compressed sentences written by humans. The task has been studied for almost two decades. Early work on this task mostly relies on syntactic information such as constituency-based parse trees to help decide what to prune from a sentence or how to re-write a sentence (Jing, 2000; Knight and Marcu, 2000). Recently, there has been much interest in applying neural network models to solve the problem, where little or no linguistic analysis is performed except for tokenization (Filippova et al., 2015; Rush et al., 2015; Chopra et al., 2016).

In-domain
Input: The southern Chinese city of Guangzhou has set up a special zone allowing foreign consulates to build permanent offices and residences and avoid prohibitive local rents, the china daily reported Tuesday.
Compressed (by human): Guangzhou opens new consulate area.
Compressed (by machine): Guangzhou sets up special zone for foreign consulates.
Out-of-domain
Input: Wherever she was, she helped other loyal and flexible wives cope.
Compressed (by human): she helped other wives cope.
Compressed (by machine): wives and flexible wives

Figure 1: Examples of in-domain and out-of-domain results by a standard abstractive sequence-to-sequence model trained on the Gigaword corpus. The first input sentence comes from the Gigaword corpus while the second input sentence comes from the written news corpus used by Clarke and Lapata (2008).

Although neural network-based models have achieved good performance on this task recently, they tend to suffer from two problems: (1) They require a large amount of data for training. For example, Filippova et al. (2015) used close to two million sentence pairs to train an LSTM-based sentence compression model. Rush et al.
(2015) used about four million title-article pairs from the Gigaword corpus (Napoles et al., 2012) as training data. Although it may be easy to automatically obtain such training data in some domains (e.g., the news domain), for many other domains, it is not possible to obtain such a large amount of training data. (2) These neural network models trained on data from one domain may not work well on out-of-domain data. For example, when we trained a standard neural sequence-to-sequence model1 on 3.8 million title-article pairs from the Gigaword corpus and applied it to both in-domain data and out-of-domain data, we found that the performance on in-domain data was good but the performance on out-of-domain data could be very 1http://opennmt.net/ 1385 poor. Two example compressed sentences by this trained model are shown in Figure 1 to illustrate the comparison between in-domain and out-ofdomain performance. The two limitations above imply that these neural network-based models may not be good at learning generalizable patterns, or in other words, they tend to overfit the training data. This is not surprising because these models do not explicitly use much syntactic information, which is more general than lexical information. In this paper, we aim to study how syntactic information can be incorporated into neural network models for sentence compression to improve their domain adaptability. We hope to train a model that performs well on both in-domain and out-ofdomain data. To this end, we extend the deletionbased LSTM model for sentence compression by Filippova et al. (2015). Although deletion-based sentence compression is not as flexible as abstractive sentence compression, we chose to work on deletion-based sentence compression for the following reason. Abstractive sentence compression allows new words to be used in a compressed sentence, i.e., words that do not occur in the input sentence. Oftentimes these new words serve as paraphrases of some words or phrases in the source sentence. But to generate such paraphrases, the model needs to have seen them in the training data. Because we are interested in a cross-domain setting, the paraphrases learned in one domain may not work well in another domain if the two domains have very different vocabularies. On the other hand, a deletion-based method does not face such a problem in a cross-domain setting. Specifically, we propose two major changes to the model by Filippova et al. (2015): (1) We explicitly introduce POS embeddings and dependency relation embeddings into the neural network model. (2) Inspired by a previous method (Clarke and Lapata, 2008), we formulate the final predictions as an Integer Linear Programming problem to incorporate constraints based on syntactic relations between words and expected lengths of the compressed sentences. In addition to the two major changes above, we also use bi-directional LSTM to include contextual information from both directions into the model. We evaluate our method using around 10,000 sentence pairs released by Filippova et al. (2015) and two other data sets representing out-ofdomain data. We test both in-domain and outof-domain performance. The experimental results showed that our proposed method can achieve competitive performance compared with the original method in the single-domain setting but with much less training data (around 8,000 sentence pairs for training instead of close to two million sentence pairs). 
In the cross-domain setting, our proposed method can clearly outperform the original method. We also compare our method with a traditional ILP-based method using syntactic structures of sentences but not based on neural networks (Clarke and Lapata, 2008). We find that our method can outperform this baseline for both in-domain and out-of-domain data. 2 Method In this section, we present our sentence compression method that is aimed at working in a crossdomain setting. 2.1 Problem Definition Recall that we focus on deletion-based sentence compression. Our problem setup is the same as that by Filippova et al. (2015). Let us use s = (w1, w2, . . . , wn) to denote an input sentence, which consists of a sequence of words. Here wi ∈V, where V is the vocabulary. We would like to delete some of the words in s to obtain a compressed sentence that still contains the most important information in s. To represent such a compressed sentence, we can use a sequence of binary labels y = (y1, y2, . . . , yn), where yi ∈{0, 1}. Here yi = 0 indicates that wi is deleted, and yi = 1 indicates that wi is retained. We assume that we have a set of training sentences and their corresponding deletion/retention labels, denoted as D = {(sj, yj)}N j=1. Our goal is to learn a sequence labeling model from D so that for any unseen sentence s we can predict its label sequence y and thus compress the sentence. 2.2 Our Base Model We first introduce our base model, which uses LSTM to perform sequence labeling. This base model is largely based on the model by Filippova et al. (2015) with some differences, which will be explained below. We assume that each word in the vocabulary has a d-dimensional embedding vector. For input sentence s, let us use (w1, w2, . . . , wn) to denote the 1386 y1 h1 y2 h2 yn hn Right LSTM Left LSTM Word Pos Dep Figure 2: Our three-layered bi-LSTM model. Word embeddings, POS tag embeddings and dependency type embeddings are concatenated in the input layer. sequence of the word embedding vectors, where wi ∈Rd. We use a standard bi-directional LSTM model to process these embedding vectors sequentially from both directions to obtain a sequence of hidden vectors (h1, h2, . . . , hn), where hi ∈Rh. We omit the details of the bi-LSTM and refer the interested readers to the work by Graves et al. (2013) for further explanation. Following Filippova et al. (2015), our bi-LSTM has three layers, as shown in Figure 2. We then use the hidden vectors to predict the label sequence. Specifically, label yi is predicted from hi as follows: p(yi | hi) = softmax(Whi + b), (1) where W ∈R2×h and b ∈R2 are a weight matrix and a weight vector to be learned. There are some differences between our base model and the LSTM model by Filippova et al. (2015). (1) Filippova et al. (2015) first encoded the input sentence in its reverse order using the same LSTM before processing the sentence for sequence labeling. (2) Filippova et al. (2015) used only a single-directional LSTM while we use bi-LSTM to capture contextual information from both directions. (3) Although Filippova et al. (2015) did not use any syntactic information in their basic model, they introduced some features based on dependency parse trees in their advanced models. Here we follow their basic model because later we will introduce more explicit syntaxbased features. (4) Filippova et al. (2015) combined the predicted yi−1 with wi to help predict yi. This adds some dependency between consecutive labels. 
We do not do this because later we will introduce an ILP layer to capture dependencies among labels.

2.3 Incorporation of Syntactic Features

Note that in the base model that we presented above, there is no explicit use of any syntactic information such as the POS tags of the words or the parse tree structures of the sentences. Because we believe that syntactic information is important for learning a generalizable model for sentence compression, we would like to introduce syntactic features into our model.

First, we perform part-of-speech tagging on the input sentences. For sentence s, let us use (t_1, t_2, . . . , t_n) to denote the POS tags of the words inside, where t_i ∈ T and T is a POS tag set. We further assume that each t ∈ T has an embedding vector (to be learned). Let us use (t_1, t_2, . . . , t_n) (t_i ∈ R^p, p < |T|) to denote the sequence of POS embedding vectors of this sentence. We can then simply concatenate w_i with t_i as a new vector to be processed by the bi-LSTM model.

Next, we perform dependency parsing on the input sentences. For each word w_i in sentence s, let r_i ∈ R denote the dependency relation between w_i and its parent word in the sentence, where R is the set of all dependency relation types. We then assume that each r ∈ R has an embedding vector (to be learned). Let (r_1, r_2, . . . , r_n) (r_i ∈ R^q, q < |R|) denote the corresponding dependency embedding vectors of this sentence. We can also concatenate w_i with r_i and feed the new vector to the bi-LSTM model.

In our model, we combine the word embedding, POS embedding and dependency embedding into a single vector to be processed by the bi-LSTM model:

x_i = w_i \oplus t_i \oplus r_i,
\overrightarrow{h}_i = \mathrm{LSTM}_{\overrightarrow{\Theta}}(\overrightarrow{h}_{i-1}, x_i),
\overleftarrow{h}_i = \mathrm{LSTM}_{\overleftarrow{\Theta}}(\overleftarrow{h}_{i+1}, x_i),
h_i = \overrightarrow{h}_i \oplus \overleftarrow{h}_i,

where \oplus represents concatenation of vectors, and \overrightarrow{\Theta} and \overleftarrow{\Theta} are the parameters of the bi-LSTM model. The complete model is shown in Figure 2.

2.4 Global Inference through ILP

Although the method above has explicitly incorporated some syntactic information into the bi-LSTM model, the syntactic information is used in a soft manner through the learned model weights. We hypothesize that there are also hard constraints we can impose on the compressed sentences. For example, the method above as well as the original method by Filippova et al. (2015) cannot impose any length constraint on the compressed sentences. This is because the labels y_1, y_2, . . . , y_n are not jointly predicted.

We propose to use Integer Linear Programming (ILP) to find an optimal combination of the labels y_1, y_2, . . . , y_n for a sentence, subject to some constraints. Specifically, the ILP problem consists of two parts: the objective function and the constraints.

The Objective Function  Recall that the trained bi-LSTM model above produces a probability distribution for each label y_i, as defined in Eqn. (1). Let us use α_i to denote the probability of y_i = 1 as estimated by the bi-LSTM model. Intuitively, we would like to set y_i to 1 if α_i is large. Besides the probability estimated by the bi-LSTM model, here we also consider the depth of the word w_i in the dependency parse tree of the sentence. Intuitively, a word closer to the root of the tree is more likely to be retained. In order to incorporate this observation, we define dep(w_i) to be the depth of the word w_i in the dependency parse tree of the sentence. The root node of the tree has a depth of 0, an immediate child of the root has a depth of 1, and so on.
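Before continuing with the ILP formulation, the feature-concatenating tagger of Sections 2.2-2.3, which supplies the probabilities α_i used below, can be sketched in PyTorch. The hyperparameters mirror those reported later in Section 3.1 (100-dimensional word embeddings, 40-dimensional POS and dependency embeddings, three stacked layers, dropout 0.5), but the vocabulary sizes and the toy batch are our own assumptions; this is a minimal sketch, not the authors' code.

```python
import torch
import torch.nn as nn

class CompressionTagger(nn.Module):
    """Bi-LSTM deletion tagger with word, POS, and dependency-relation
    embeddings concatenated at the input layer (Section 2.3)."""
    def __init__(self, n_words, n_pos, n_dep, d_word=100, d_feat=40, d_hid=100):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d_word)
        self.pos_emb = nn.Embedding(n_pos, d_feat)
        self.dep_emb = nn.Embedding(n_dep, d_feat)
        self.bilstm = nn.LSTM(d_word + 2 * d_feat, d_hid, num_layers=3,
                              bidirectional=True, batch_first=True, dropout=0.5)
        self.out = nn.Linear(2 * d_hid, 2)   # retain vs. delete

    def forward(self, words, pos, dep):
        x = torch.cat([self.word_emb(words), self.pos_emb(pos),
                       self.dep_emb(dep)], dim=-1)
        h, _ = self.bilstm(x)
        return torch.log_softmax(self.out(h), dim=-1)

# Toy batch: one sentence of 6 tokens, indices into assumed vocabularies.
model = CompressionTagger(n_words=5000, n_pos=45, n_dep=40)
words = torch.randint(0, 5000, (1, 6))
pos = torch.randint(0, 45, (1, 6))
dep = torch.randint(0, 40, (1, 6))
alpha = model(words, pos, dep).exp()[..., 1]   # P(y_i = 1) per token
print(alpha.shape)  # torch.Size([1, 6])
```

These per-token probabilities are exactly the α_i values that enter the ILP objective described next.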
For example, the dependency parse tree of an example sentence together with the depth of each word is shown in Figure 3. We can see that some of the words that are deleted according to the ground truth have a relatively larger depth, such as the first “she” (with a depth of 4) and the word “flexible” (with a depth of 5). Based on these considerations, we define the objective function to be the following:

\max \sum_{i=1}^{n} y_i (\alpha_i - \lambda \cdot dep(w_i)),   (2)

where λ is a positive parameter to be manually set, and y_i is the same as defined before, i.e., either 0 or 1 to indicate whether w_i is deleted or not.

Constraints  We further introduce some constraints to capture two considerations. The first consideration is related to the syntactic structure of a sentence, and the second is related to the length of the compressed sentence. Some of the constraints are inspired by Clarke and Lapata (2008). Our constraints are listed below:
(1) No missing parent: Generally, we believe that if a word is retained in the compressed sentence, its parent in the dependency parse tree should also be retained.
(2) No missing child: For some dependency relations such as nsubj, if the parent word is retained, it makes sense to also keep the child word; otherwise the sentence may become ungrammatical.
(3) Max length: Since we are trying to compress a sentence, we may need to guarantee a minimum amount of compression. This can be achieved by setting a maximum value for the sum of the y_i.
(4) Min length: We observe that the original model sometimes produces very short compressed sentences. We therefore believe that it is also important to maintain a minimum length of the compressed sentence. This can be achieved by setting a minimum value for the sum of the y_i.
Formally, the constraints are listed as follows:

\sum_{i=1}^{n} y_i \le \beta n,
\sum_{i=1}^{n} y_i \ge \gamma n,
\forall i : y_i \le y_{p_i},
\forall i \text{ s.t. } r_i \in T' : y_i \ge y_{p_i},

where w_{p_i} is the parent word of w_i in the dependency parse tree, r_i is the dependency relation type between w_i and w_{p_i}, and T' is a set of dependency relations for which the child word is often retained when the parent word is retained in the compressed sentence. The set T' is derived as follows. For each dependency relation type, based on the training data, we compute the conditional probability of the child word being retained given that the parent word is retained. If this probability is higher than 90%, we include this dependency relation type in T'.

[Figure 3: Dependency parse tree of an example sentence. The numbers below the words indicate the depths of the words in the tree. Words in gray are supposed to be deleted based on the ground truth.]

3 Experiments

3.1 Datasets and Experiment Settings

Because we are mostly interested in a cross-domain setting where the model is trained on one domain and tested on a different domain, we need data from different domains for our evaluation. Here we use three datasets.
Google News: The first dataset contains 10,000 sentence pairs collected and released by Filippova et al. (2015)2. The sentences were automatically acquired from the web through Google News using a method introduced by Filippova and Altun (2013). The news articles were from 2013 and 2014.
BNC News: The second dataset contains around 1,500 sentence pairs collected by Clarke and Lapata (2008)3. The sentences were from the British National Corpus (BNC) and the American News Text corpus before 2008.
Research Papers: The last dataset contains 100 sentences taken from 10 randomly selected papers published at the ACL conference in 2016.
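Returning briefly to the ILP of Section 2.4 before the experimental details continue, the objective in Eq. (2) and the four constraints can be written down directly with an off-the-shelf solver. The sketch below uses PuLP with its bundled CBC solver purely for illustration; the α_i scores, dependency heads, relations, and depths in the toy example are invented and do not come from the paper.

```python
import pulp

def compress(tokens, alpha, parent, relation, depth,
             keep_child_rels, lam=0.5, beta=0.7, gamma=0.2):
    """Solve the ILP of Section 2.4: maximize Eq. (2) subject to the
    parent/child and length constraints."""
    n = len(tokens)
    y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(n)]
    prob = pulp.LpProblem("compression", pulp.LpMaximize)
    prob += pulp.lpSum(y[i] * (alpha[i] - lam * depth[i]) for i in range(n))
    prob += pulp.lpSum(y) <= beta * n          # (3) max length
    prob += pulp.lpSum(y) >= gamma * n         # (4) min length
    for i in range(n):
        if parent[i] is not None:
            prob += y[i] <= y[parent[i]]       # (1) no missing parent
            if relation[i] in keep_child_rels:
                prob += y[i] >= y[parent[i]]   # (2) no missing child
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [t for t, v in zip(tokens, y) if v.value() == 1]

tokens = ["she", "helped", "other", "loyal", "and", "flexible", "wives", "cope"]
alpha = [0.9, 0.95, 0.6, 0.2, 0.1, 0.2, 0.9, 0.9]   # toy bi-LSTM scores
parent = [1, None, 6, 6, 3, 3, 1, 1]                # toy dependency heads
relation = ["nsubj", "root", "amod", "amod", "cc", "conj", "obj", "xcomp"]
depth = [1, 0, 2, 2, 3, 3, 1, 1]
print(compress(tokens, alpha, parent, relation, depth, {"nsubj"}))
# ['she', 'helped', 'wives', 'cope']
```

With real α_i values from the tagger and the learned set T', the retained words would of course depend on the data; the experiments below use an open-source ILP solver for the same optimization.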
For Google News and BNC News, we have the ground truth compressed sentences, which are deletion-based compressions, i.e., subsequences of the original sentences. For Research Papers, we use it only for manual evaluation in terms of readability and informativeness, as we will explain below. We evaluate three settings of our method: BiLSTM: In this setting, we use only the base biLSTM model without incorporating any syntactic feature. BiLSTM+SynFeat: In this setting, we combine word embeddings with POS embeddings and de2Available at http://storage.googleapis. com/sentencecomp/compression-data.json. 3Available at http://jamesclarke.net/ research/resources/. pendency embeddings as input to the bi-LSTM model and use the predictions of y from the biLSTM model. BiLSTM+SynFeat+ILP: In this setting, on top of BiLSTM+SynFeat, we solve the ILP problem as described in Section 2.4 to predict the final label sequence y. In the experiments, our model was trained using the Adam (Kingma and Ba, 2015) algorithm with a learning rate initialized at 0.001. The dimension of the hidden layers of bi-LSTM is 100. Word embeddings are initialized from GloVe 100dimensional pre-trained embeddings (Pennington et al., 2014). POS and dependency embeddings are randomly initialized with 40-dimensional vectors. The embeddings are all updated during training. Dropping probability for dropout layers between stacked LSTM layers is 0.5. The batch size is set as 30. For the ILP part, λ is set to 0.5, β and γ are turned by the validation data and finally they are set to 0.7 and 0.2, respectively. We utilize an open source ILP solver4 in our method. We compare our methods with a few baselines: LSTM: This is the basic LSTM-based deletion method proposed by (Filippova et al., 2015). We report both the performance they achieved using close to two million training sentence pairs and the performance of our re-implementation of their model trained on the 8,000 sentence pairs. LSTM+: This is advanced version of the model proposed by Filippova et al. (2015), where the authors incorporated some dependency parse tree information into the LSTM model and used the prediction on the previous word to help the prediction on the current word. Traditional ILP: This is the ILP-based method proposed by Clarke and Lapata (2008). This method does not use neural network models and 4gnu.org/software/glpk 1389 is an unsupervised method that relies heavily on the syntactic structures of the input sentences5. Abstractive seq2seq: This is an abstractive sequence-to-sequence model trained on 3.8 million Gigaword title-article pairs as described in Section 1. 3.2 Automatic Evaluation With the two datasets Google News and BNC News that have the ground truth compressed sentences, we can perform automatic evaluation. We first split the Google News dataset into a training set, a validation set and a test set. We took the first 1,000 sentence pairs from Google News as the test set, following the same practice as Filippova et al. (2015). We then use 8,000 of the remaining sentence pairs for training and the other 1,000 sentence pairs for validation. For the NBC News dataset, we use it only as a test set, applying the sentence compression models trained from the 8,000 sentence pairs from Google News. We use the ground truth compressed sentences to compute accuracy and F1 scores. Accuracy is defined as the percentage of tokens for which the predicted label yi is correct. 
F1 scores are derived from precision and recall values, where precision is defined as the percentage of retained words that overlap with the ground truth, and recall is defined as the percentage of words in the ground truth compressed sentences that overlap with the generated compressed sentences. We report both in-domain performance and cross-domain performance in Table 1. From the table, we have the following observations: (1) For the abstractive sequence-to-sequence model, it was trained on the Gigaword data, so for both Google News and NBC News, the performance shown is cross-domain performance. We can see that indeed this abstractive method performed poorly in cross-domain settings. (2) In the in-domain setting, with the same amount of training data (8,000), our BiLSTM method with syntactic features (BiLSTM+SynFeat and BiLSTM+SynFeat+ILP) performs similarly to or better than the LSTM+ method proposed by Filippova et al. (2015), in terms of both F1 and accuracy. This shows that our method is comparable to the LSTM+ method in the in-domain setting. (3) In the in-domain setting, even compared with the 5We use an open source implementation: https:// github.com/cnap/sentence-compression. 1000 2000 3000 4000 5000 6000 7000 8000 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 F1 value In-domain data LSTM+(Filippova et al.) Bi_LSTM Bi_LSTM+SynFeat Bi_LSTM+SynFeat+ILP Traditional ILP 1000 2000 3000 4000 5000 6000 7000 8000 training size 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 F1 value Out-of-domain data Figure 4: F1 scores with different sizes of training data for in-domain and cross-domain settings. performance of LSTM+ trained on 2 million sentence pairs, our method trained on 8,000 sentence pairs does not perform substantially worse. (4) In the out-of-domain setting, our BiLSTM+SynFeat and BiLSTM+SynFeat+ILP methods clearly outperform the LSTM and LSTM+ methods. This shows that by incorporating more syntactic features, our methods learn a sentence compression model that is less domain-dependent. (5) The Traditional ILP method also works better than the LSTM and LSTM+ methods in the out-of-domain setting. This is probably because the Traditional ILP method relies heavily on syntax, which is less domain-dependent compared with lexical patterns. But the Traditional ILP method performs worse in the in-domain setting than both the LSTM and LSTM+ methods and our methods. Overall, Table 1 shows that our proposed method combines both the strength of neural network models in the in-domain setting and the strength of the syntax-based methods in the crossdomain setting. Therefore, our method works reasonably well for both in-domain and out-ofdomain data. We also notice that on Google News, adding the ILP layer decreased the sentence compression performance. After some analysis, we think the reason is that some of the constraints used in the ILP layer have led to less deletion but the ground truth compressed sentences in the Google News data tend to be shorter compared with those in the NBC News data. 
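For concreteness, the token-level accuracy and F1 used in this section can be computed as in the short sketch below; the gold and predicted label sequences are toy examples based on the sentence from Figure 1, not data from the evaluation.

```python
def token_scores(gold, pred):
    # Token-level accuracy and F1 for deletion-based compression.
    # gold and pred are 0/1 label sequences (1 = word retained).
    assert len(gold) == len(pred)
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    overlap = sum(g == p == 1 for g, p in zip(gold, pred))
    retained_pred, retained_gold = sum(pred), sum(gold)
    precision = overlap / retained_pred if retained_pred else 0.0
    recall = overlap / retained_gold if retained_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, f1

# "she helped other loyal and flexible wives cope"
gold = [1, 1, 1, 0, 0, 0, 1, 1]   # "she helped other wives cope"
pred = [1, 1, 0, 0, 0, 0, 1, 1]   # "she helped wives cope"
print(token_scores(gold, pred))   # (0.875, 0.888...)
```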
We also conduct additional experiments to see the effect of the training data size on our meth1390 size of Google News NBC News training data F1 Acc CR F1 Acc CR LSTM (Filippova et al., 2015) 2 million 0.80 0.39 LSTM+ (Filippova et al., 2015) 2 million 0.82 0.38 Traditional ILP (Clarke and Lapata, 2008) N/A 0.54 0.56 0.62 0.64 0.56 0.56 Abstractive seq2seq 3.8M 0.09 0.02 0.16 0.14 0.06 0.21 LSTM (our implementation) 8000 0.74 0.75 0.45 0.51 0.48 0.37 LSTM+ (our implementation) 8000 0.77 0.78 0.47 0.54 0.51 0.38 BiLSTM 8000 0.75 0.76 0.43 0.52 0.50 0.34 BiLSTM+SynFeat 8000 0.80 0.82 0.43 0.57 0.54 0.37 BiLSTM+SynFeat+ILP 8000 0.78 0.78 0.57 0.66 0.58 0.53 Table 1: Automatic evaluation of our sentence compression methods. CR standards for compression rate and is defined as the average percentage of words that are retained after compression. ods and the LSTM+ method. Figure 4 shows the F1 scores on the in-domain Google News data and the out-of-domain NBC News data when we train the models using different amounts of sentence pairs. We can see that in the in-domain setting, our method does not have any advantage over the LSTM+ method. But in the cross-domain setting, our method that uses ILP to impose syntax-based constraints clearly performs better than LSTM+ when the amount of training data is relatively small. 3.3 Manual Evaluation The evaluation above does not look at the readability of the compressed sentences. In order to evaluate whether sentences generated by our method are readable, we adopt the manual evaluation procedure by Filippova et al. (2015) to compare our method with LSTM+ and Traditional ILP in terms of readability and informativeness. We asked two raters to score a randomly selected set of 100 sentences from the Research Papers dataset. The compressed sentences were randomly ordered and presented to the human raters to avoid any bias. The raters were asked to score the sentences on a five-point scale in terms of both readability and informativeness. We show the average scores of the three methods we compare in Table 3. We can see that our BiLSTM+SynFeat+ILP method clearly outperforms the two baseline methods in the manual evaluation. We also show a small sample of input sentences from the Research Papers dataset and the automatically compressed sentences by different methods in Table 2. As we can see from the table, a general weakness of the LSTM+ method is that the compressed sentences may not be grammatical. In comparison, our method does better in terms of preserving grammaticality. 4 Related Work Sentence compression can be seen as sentencelevel summarization. Similar to document summarization, sentence compression methods can be divided into extractive compression and abstractive compression methods, based on whether words in the compressed sentence all come from the source sentence. In this paper, we focus on deletion-based sentence compression, which is a spacial case of extractive sentence compression. An early work on sentence compression was done by Jing (2000), who proposed to use several resources to decide whether a phrase in a sentence should be removed or not. Knight and Marcu (2000) proposed to apply a noisy-channel model from machine translation to the sentence compression task, but their model encountered the problem that many SCFG rules have unreliable probability estimates with inadequate data. 
Galley and McKeown (2007) tried to solve this problem by utilizing parent annotation, Markovization and lexicalization, which have all been shown to improve the quality of the rule probability estimates. Cohn and Lapata (2007) formulated sentence compression as a tree-to-tree rewrite problem. They utilized a synchronous tree substitution grammar (STSG) to license the space of all possible rewrites. Each rule has a weight learned from the training data. For prediction, an algorithm was used to search for the best scoring compression using the gram1391 Although dynamic oracles are widely used in dependency parsing and available for most standard transition systems , no dynamic oracle parsing model has yet been proposed for phrase structure grammars T: Although are used for transition systems model has been proposed for structure grammars . S: Although dynamic oracles are . B: Although oracles are used no model has been proposed for structure grammars . As described above , we used Bayesian Optimization to find optimal hyperparameter configurations in fewer steps than in regular grid search . T: As described we used Optimization to find configurations in steps in search . S: As described above Optimization to find optimal hyperparameter configurations steps than in grid search . B: As described , we used Bayesian Optimization to find optimal hyperparameter configurations in steps. Following the phrase structure of a source sentence , we encode the sentence recursively in a bottom-up fashion to produce a vector representation of the sentence and decode it while aligning the input phrases and words with the output . T: Following structure of a sentence we encode sentence recursively to produce a representation of the sentence and decode it while aligning phrases and words with output . S: Following the structure of a source sentence encode the sentence recursively in a bottom-up fashion . B: Following the structure , we encode the sentence recursively in a bottom-up fashion to produce a vector representation and decode it . Table 2: Some input sentences from the Research Papers dataset and the automatically compressed sentences using different methods. T: Traditional ILP method. S: LSTM+. B: BiLSTM+SynFeat+ILP. readability informativeness Traditional ILP 3.94 3.33 LSTM+ 3.69 3.07 BiLSTM+SynFeat+ILP 4.29 3.46 Table 3: Manual evaluation. mar rules. Besides, Cohn and Lapata (2008) extended this model to abstractive sentence compression, which includes substitution, reordering and insertion. McDonald (2006) proposed a graphbased sentence compression method. The general idea is that each word pair in the original sentence has a score. The task then becomes how to find a compressed sentence with a length limit according word pair scores. Their method is similar to graphbased dependency parsing. Clarke and Lapata (2008) first used an ILP framework for sentence compression. In the paper, the author put forward three models. The first model is a language model reformulated by ILP. As the first model treats all the words equally, the second model uses a corpus to learn an importance score for each word and then incorporates it in the ILP model. The Last model, which is based on (McDonald, 2006), replaces the decoder with an ILP model and adds many linguistic constraints such as dependency parsing compared with the previous two ILP models. Filippova and Strube (2008) represented sentences with dependency parse trees and an ILPbased method was used to decide whether the dependencies were preserved or not. 
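For readers unfamiliar with ILP-based compression, the following toy sketch shows the basic shape of such a model: a binary keep/delete variable per word and a linear objective over word importance scores, subject to a length bound. It is only illustrative and is not Clarke and Lapata's actual formulation, which additionally includes language-model terms and many linguistic constraints; PuLP is our choice of solver here (the method evaluated above used GLPK), and the example words and scores are invented.

```python
import pulp

def compress_ilp(words, importance, max_len):
    """Toy deletion ILP: keep at most max_len words while maximizing the
    total importance of the retained words."""
    prob = pulp.LpProblem("sentence_compression", pulp.LpMaximize)
    keep = [pulp.LpVariable(f"keep_{i}", cat="Binary") for i in range(len(words))]
    prob += pulp.lpSum(importance[i] * keep[i] for i in range(len(words)))
    prob += pulp.lpSum(keep) <= max_len          # length constraint
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [w for w, k in zip(words, keep) if k.value() == 1]

words = ["the", "tired", "old", "dog", "slept", "on", "the", "warm", "porch"]
importance = [0.2, 0.4, 0.3, 0.9, 0.9, 0.3, 0.2, 0.4, 0.8]
print(compress_ilp(words, importance, max_len=4))
```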
Different from most previous work that treats sentence extraction and sentence compression separately, BergKirkpatrick et al. (2011) jointly model the two processes in one ILP problem. Bigrams and subtrees are represented by some features, and feature are learned on some training data. The ILP problem maximizes the coverage of weighted bigrams and deleted subtrees of the summary. In recent years, neural network models, especially sequence-to-sequence models, have been applied to sentence compression. Our work is based on the deletion-based LSTM model for sentence compression by Filippova et al. (2015). There has also been much interest in applying sequence-to-sequence models for abstractive sentence compression (Rush et al., 2015; Chopra et al., 2016). As we pointed out in Section 1, in a cross-domain setting, abstractive sentence compression may not be suitable. 5 Conclusions In this paper, we studied how to modify an LSTM model for deletion-based sentence compression so that the model works well in a cross-domain setting. We hypothesized that incorporation of syntactic information into the training of the LSTM model would help. We thus proposed two ways to incorporate syntactic information, one through directly adding POS tag embeddings and dependency type embeddings, and the other through the objective function and constraints of an Integer Linear Programming (ILP) model. The experiments showed that our proposed bi-LSTM model with syntactic features and an ILP layer works 1392 well in both in-domain and cross-domain settings. In comparison, the original LSTM model does not work well in the cross-domain setting, and a traditional ILP method does not work well in the in-domain setting. Therefore, our proposed method is relatively more robust than these baselines. We also manually evaluated the compressed sentences generated by our method and found that the method works better than the baselines in terms of both readability and informativeness. Acknowledgment This work is supported by DSO grant DSOCL15223. The work was conducted during the first author’s visit to the Singapore Management University. References Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings the 2016 Conference of the North American Chapter of the Association for Computational Linguistics. James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research . Trevor Cohn and Mirella Lapata. 2007. Large margin synchronous generation and its application to sentence compression. In Joint Meeting of Conference on Empirical Methods in Natural Language and Conference on Computational Natural Language Learning. Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics. Katja Filippova, Enrique Alfonseca, Carlos A. Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Katja Filippova and Michael Strube. 2008. Dependency tree based sentence compression. In Proceedings of the Fifth International Natural Language Generation Conference. Michel Galley and Kathleen McKeown. 2007. Lexicalized markov grammars for sentence compression. In Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics. Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional lstm. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. Hongyan Jing. 2000. Sentence reduction for automatic text summarization. In Proceedings of the sixth conference on Applied natural language processing. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Kevin Knight and Daniel Marcu. 2000. Statisticsbased summarization step one: Sentence compression. In Proceedings of the 17th National Conference on Artificial Intelligence. Ryan T McDonald. 2006. Discriminative sentence compression with soft syntactic evidence. In Proceedings of European Chapter of the Association for Computational Linguistics Valencia. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 1393

Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1394–1404 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1128 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1394–1404 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1128 Transductive Non-linear Learning for Chinese Hypernym Prediction Chengyu Wang1, Junchi Yan2,1, Aoying Zhou1, Xiaofeng He1∗ 1 Shanghai Key Laboratory of Trustworthy Computing, East China Normal University 2 IBM Research – China [email protected], [email protected] {ayzhou,xfhe}@sei.ecnu.edu.cn Abstract Finding the correct hypernyms for entities is essential for taxonomy learning, finegrained entity categorization, knowledge base construction, etc. Due to the flexibility of the Chinese language, it is challenging to identify hypernyms in Chinese accurately. Rather than extracting hypernyms from texts, in this paper, we present a transductive learning approach to establish mappings from entities to hypernyms in the embedding space directly. It combines linear and non-linear embedding projection models, with the capacity of encoding arbitrary language-specific rules. Experiments on real-world datasets illustrate that our approach outperforms previous methods for Chinese hypernym prediction. 1 Introduction A hypernym of an entity characterizes the type or the class of the entity. For example, the word country is the hypernym of the entity Canada. The accurate prediction of hypernyms benefits a variety of NLP tasks, such as taxonomy learning (Wu et al., 2012; Fu et al., 2014), fine-grained entity categorization (Ren et al., 2016), knowledge base construction (Suchanek et al., 2007), etc. In previous work, the detection of hypernyms requires lexical, syntactic and/or semantic analysis of relations between entities and their respective hypernyms from a language-specific knowledge source. For example, Hearst (1992) is the pioneer work to extract is-a relations from a text corpus based on handcraft patterns. The followingup work mostly focuses on is-a relation extraction using automatically generated patterns (Snow ∗Corresponding author. et al., 2004; Ritter et al., 2009; Sang and Hofmann, 2009; Kozareva and Hovy, 2010) and relation inference based on distributional similarity measures (Kotlerman et al., 2010; Lenci and Benotto, 2012; Shwartz et al., 2016). While these approaches have relatively high precision over English corpora, extracting hypernyms for entities is still challenging for Chinese. From the linguistic perspective, Chinese is a lower-resourced language with very flexible expressions and grammatical rules (Wang et al., 2015). For instance, there are no word spaces, explicit tenses and voices, and distinctions between singular and plural forms in Chinese. The order of words can be changed flexibly in sentences. Hence, as previous research indicates, hypernym extraction methods for English are not necessarily suitable for the Chinese language (Fu et al., 2014; Wang et al., 2015; Wang and He, 2016). Based on such conditions, several classification methods are proposed to distinguish is-a and notis-a relations based on Chinese encyclopedias (Lu et al., 2015; Li et al., 2015). Similar to Princeton WordNet, a few Chinese wordnets have also been developed (Huang et al., 2004; Xu et al., 2008; Wang and Bond, 2013). 
The most recent approaches for Chinese is-a relation extraction (Fu et al., 2014; Wang and He, 2016) use word embedding based linear projection models to map embeddings of hyponyms to those of their hypernyms, which outperform previous algorithms. However, we argue that these projection-based methods may have three potential limitations: (i) Only positive is-a relations are used for projection learning. The distinctions between is-a and not-is-a relations in the embedding space are not modeled. (ii) These methods lack the capacity to encode linguistic rules, which are designed by linguists and usually have high precision. (iii) It assumes that the linguistic regularities of is-a rela1394 tions can be solely captured by single or multiple linear projection models. In this paper, we address these limitations by a two-stage transductive learning approach. It distinguishes is-a and not-is-a relations given a Chinese word/phrase pair as input. In the initial stage, we train linear projection models on positive and negative training data separately and predict isa relations jointly. In the transductive learning stage, the initial prediction results, linguistic rules and the non-linear mappings from entities to hypernyms are optimized simultaneously in a unified framework. This optimization problem can be efficiently solved by blockwise gradient descent. We evaluate our method over two public datasets and show that it outperforms state-of-the-art approaches for Chinese hypernym prediction. The rest of this paper is organized as follows. We summarize the related work in Section 2. Our approach is introduced in Section 3. Experimental results are presented in Section 4. We conclude our paper in Section 5. 2 Related Work In this section, we overview the related work on hypernym prediction and discuss the challenges of Chinese hypernym detection. Pattern based methods identify is-a relations from texts by handcraft or automatically generated patterns. Hearst patterns (Hearst, 1992) are lexical patterns in English that are employed to extract isa relations for taxonomy construction (Wu et al., 2012). Automatic approaches mostly use iterative learning paradigms such that the system learns new is-a relations and patterns simultaneously. A few relevant studies can be found in (Caraballo, 1999; Etzioni et al., 2004; Sang, 2007; Pantel and Pennacchiotti, 2006; Kozareva and Hovy, 2010). To avoid “semantic drift” in iterations, Snow et al. (2004) train a hypernym classifier based on syntactic features based on parse trees. Carlson et al. (2010) exploit multiple learners to extract relations via coupled learning. These approaches are not effective for Chinese for two reasons: i) Chinese is-a relations are expressed in a highly flexible manner (Fu et al., 2014) and ii) the accuracy of basic NLP tasks such as dependency parsing still need improvement for Chinese (Li et al., 2013). Inference based methods take advantage of distributional similarity measures (DSM) to infer relations between words. They assume that a hypernym may appear in all contexts of the hyponyms and a hyponym can only appear in part of the contexts of its hypernyms. In previous work, Kotlerman et al. (2010) design directional DSMs to model the asymmetric property of is-a relations. Other DSMs are introduced in (Bhagat et al., 2007; Szpektor et al., 2007; Lenci and Benotto, 2012; Santus et al., 2014). Shwartz et al. (2016) combine dependency parsing and DSM to improve the performance of hypernymy detection. 
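The distributional inclusion hypothesis behind these DSMs, that a hyponym's contexts should be largely covered by its hypernym's contexts, can be illustrated with a simple directional score. The sketch below is a generic inclusion-style measure over weighted context features, not the specific measures proposed in the cited papers, and the toy context dictionaries are invented.

```python
def directional_inclusion(contexts_x, contexts_y):
    """contexts_*: dict mapping context feature -> weight (e.g., a PMI value).
    Returns the proportion of x's context mass that also co-occurs with y.
    A high score for (x, y) but a low score for (y, x) suggests y is the
    broader term, i.e., a candidate hypernym of x."""
    shared = sum(w for f, w in contexts_x.items() if f in contexts_y)
    total = sum(contexts_x.values())
    return shared / total if total else 0.0

dog = {"barks": 2.0, "furry": 1.5, "pet": 1.0}
animal = {"furry": 1.0, "pet": 1.2, "wild": 0.8, "barks": 0.5}
print(directional_inclusion(dog, animal), directional_inclusion(animal, dog))
```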
The reason why DSM is not effective for Chinese is that the contexts of entities in Chinese are flexible and sparse. Encyclopedia based methods take encyclopedias as knowledge sources to construct taxonomies. Ponzetto and Strube (2007) design features from multiple aspects to predict is-a relations between entities and categories in English Wikipedia. The taxonomy in YAGO (Suchanek et al., 2007) is constructed by linking conceptual categories in Wikipedia to WordNet synsets (Miller, 1995). For Chinese, Li et al. (2015) propose an SVM-based approach to build a large Chinese taxonomy from Wikipedia. Similar classification based algorithms are presented in (Fu et al., 2013; Lu et al., 2015). Due to the lack of Chinese version of WordNet, several Chinese semantic dictionaries have been conducted, such as Sinica BOW (Huang et al., 2004), SEW (Xu et al., 2008), COW (Wang and Bond, 2013), etc. These approaches have higher accuracy than mining hypernym relations from texts directly. However, they heavily rely on existing knowledge sources and are difficult to extend to different domains. To tackle these challenges, word embedding based methods directly model the task of hypernym prediction as learning a mapping from entity vectors to their respective hypernym vectors in the embedding space. The vectors can be pretrained by neural language models (Mikolov et al., 2013). For the Chinese language, Fu et al. (2014) train piecewise linear projection models based on a Chinese thesaurus. The state-of-the-art method (Wang and He, 2016) combines an iterative learning procedure and Chinese Hearst-style patterns to improve the performance of projection models. They can reduce data noise by avoiding direct parsing of Chinese texts, but still capture the linguistic regularities of is-a relations based on word embeddings. Additionally, several work aims to study how to combine word embeddings for re1395 lation classification, such as (Mirza and Tonelli, 2016). In our paper, we extend these approaches by modeling non-linear mappings from entities to hypernyms and adding linguistic rules via a unified transductive learning framework. 3 Proposed Approach This section begins with a brief overview of our approach. After that, the detailed steps and the learning algorithm are introduced in detail. 3.1 Overview Given a word/phrase pair (xi, yi), the goal of our task is to learn a classification model to predict whether yi is the hypernym of xi. As illustrated in Figure 1, our approach has two stages: initial stage and transductive learning stage. The input is a positive is-a set D+, a negative is-a set D−and an unlabeled set DU, all of which are the collections of word/phrase pairs. Denote xi as the embedding vector of word xi, pre-trained and stored in a lookup table. In the initial stage, we train a linear projection model over D+ such that for each (xi, yi) ∈D+, a projection matrix maps the entity vector xi to its hypernym vector yi. A similar model is also trained over D−. Based on the two models, we estimate the prediction score and the confidence score for each (xi, yi) ∈DU. In the transductive learning stage, a joint optimization problem is formed to learn the final prediction score for each (xi, yi) ∈DU. It aims to minimize the prediction errors based on the human labeled data, the initial model prediction and linguistic rules. It also employs nonlinear mappings to capture linguistic regularities of is-a relations other than linear projections. 
Initial Stage Positive Is-a Set Negative Is-a Set Unlabeled Set Positive Projection Model Negative Projection Model Lookup Table Linguistic Rules Transductive Learning Model Transductive Learning Stage Figure 1: General framework of our approach. 3.2 Initial Model Training The initial stage models how entities are mapped to their hypernyms or non-hypernyms by projection learning. We first train a Skip-gram model (Mikolov et al., 2013) to learn word embeddings over a large text corpus. Inspired by (Fu et al., 2014; Wang and He, 2016), for each (xi, yi) ∈ D+, we assume there is a positive projection model such that M+xi ≈yi where M+ is an |xi|×|xi| projection matrix1. However, this model does not capture the semantics of not-is-a relations. Thus, we learn a negative projection model M−xi ≈yi where (xi, yi) ∈D−. This approach is equivalent to learning two separate translation models within the same semantic space. For parameter estimation, we minimize the two following objectives: J(M+) = 1 2 X (xi,yi)∈D+ ∥M+xi−yi∥2 2+λ 2 ∥M+∥2 F J(M−) = 1 2 X (xi,yi)∈D− ∥M−xi−yi∥2 2+λ 2 ∥M−∥2 F where λ > 0 is a Tikhonov regularization parameter (Golub et al., 1999). In the testing phase, for each (xi, yi) ∈ DU, denote d+(xi, yi) = ∥M+xi −yi∥2 and d−(xi, yi) = ∥M−xi−yi∥2. The prediction score is defined as: score(xi, yi) = tanh(d−(xi, yi) −d+(xi, yi)) where score(xi, yi) ∈(−1, 1). Higher prediction score indicates there is a larger probability of an is-a relation between xi and yi. We choose the hyperbolic tangent function rather than the sigmoid function to avoid the widespread saturation of sigmoid function (Menon et al., 1996). Because the semantics of Chinese is-a and not-is-a relations are complicated and difficult to model (Fu et al., 2014), we do not impose explicit connections between M+ and M−and let the algorithm learn the parameters automantically. The difference between d+(xi, yi) and d−(xi, yi) can be also used to indicate whether the models are confident enough to make a prediction. 1We have also examined piecewise linear projection models proposed in (Fu et al., 2014; Wang and He, 2016) as the initial models for transductive learning. However, we found that this practice is less efficient and the performance does not improve significantly. 1396 In this paper, we calculate the confidence score as: conf(xi, yi) = |d+(xi, yi) −d−(xi, yi)| max{d+(xi, yi), d−(xi, yi)} where conf(xi, yi) ∈(0, 1). Higher confidence score means that there is a larger probability that the models can predict whether there is an is-a relation between xi and yi correctly. This score gives different data instances different weights in the transductive learning stage. 3.3 Transductive Non-linear Learning Although linear projection methods are effective for Chinese hypernym prediction, it does not encode non-linear transformation and only leverages the positive data. We present an optimization framework for non-linear mapping utilizing both labeled and unlabeled data and linguistic rules by transductive learning (Gammerman et al., 1998; Chapelle et al., 2006). Let Fi be the final prediction score of the word/phrase pair (xi, yi). In the initialization stage of our algorithm, we set Fi = 1 if (xi, yi) ∈ D+, Fi = −1 if (xi, yi) ∈D−and set Fi randomly in (−1, 1) if (xi, yi) ∈DU. In matrix representation, denote F as the m × 1 final prediction vector where m = |D+| + |D−| + |DU|. Fi is the ith element in F. 
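Before turning to the three components of the transductive stage, the initial stage just described amounts to two ridge regressions in the embedding space plus the two scores defined above. A minimal numpy sketch follows; using the closed-form ridge solution is our assumption (the paper does not state which solver is used), and all function and variable names are ours.

```python
import numpy as np

def fit_projection(X, Y, lam=1e-3):
    """X, Y: (d, n) matrices whose columns are entity / hypernym embeddings
    of training pairs. Solves min_M 0.5*||MX - Y||_F^2 + 0.5*lam*||M||_F^2."""
    d = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))

def initial_scores(M_pos, M_neg, x, y):
    """Prediction and confidence scores for one candidate pair (x, y)."""
    d_pos = np.linalg.norm(M_pos @ x - y)
    d_neg = np.linalg.norm(M_neg @ x - y)
    score = np.tanh(d_neg - d_pos)               # in (-1, 1); higher -> more likely is-a
    conf = abs(d_pos - d_neg) / max(d_pos, d_neg)
    return score, conf

# M_pos = fit_projection(X_pos, Y_pos)   # trained on D+
# M_neg = fit_projection(X_neg, Y_neg)   # trained on D-
```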
The three components in our transductive learning model are as follows: 3.3.1 Initial Prediction Denote S as an m×1 initial prediction vector. We set Si = 1 if (xi, yi) ∈D+, Si = −1 if (xi, yi) ∈ D−and Si = score(xi, yi) if (xi, yi) ∈DU. In order to encode the confidence of model prediction, we define W as an m × m diagonal weight matrix. The element in the ith row and the jth column of W is set as follows: Wi,j =      conf(xi, yi) i = j, (xi, yi) ∈DU 1 i = j, (xi, yi) ∈D+ ∪D− 0 Otherwise The objective function is defined as: Os = ∥W(F −S)∥2 2, which encodes the hypothesis that the final prediction should be similar to the initial prediction for unlabeled data or human labeling for training data. The weight matrix W gives the largest weight (i.e., 1) to all the pairs in D+ ∪D− and a larger weight to the pair (xi, yi) ∈DU if the initial prediction is more confident. 3.3.2 Linguistic Rules Although linguistic rules can only cover a few circumstances, they are effective to guide the learning process. For Chinese hypernym prediction, Li et al. (2015) study the word formation of conceptual categories in Chinese Wikipedia. In our model, let C be the collection of linguistic rules. γi is the true positive (or negative) rate with respect to the respective positive (or negative) rule ci ∈C, estimated over the training set. Considering the word formation of Chinese entities and hypernyms, we design one positive rule (i.e., P1) and two negative rules (i.e., N1 and N2), shown in Table 1. Let R be an m × 1 linguistic rule vector and Ri is the ith element in R. For training data, we set Ri = 1 if (xi, yi) ∈D+ and Ri = −1 if (xi, yi) ∈D−, which follows the same settings as those in S. For unlabeled pairs that do not match any linguistic rules in C, we update Ri = Fi in each iteration of the learning process, meaning no loss for errors imposed in this part. For other conditions, denote C(xi,yi) ⊆C as the collection of rules that (xi, yi) matches. If C(xi,yi) are positive rules, we set Ri as follows: Ri = max{Fi, max cj∈C(xi,yi) γj} Similarly, if C(xi,yi) are negative rules, we have: Ri = −max{−Fi, max cj∈C(xi,yi) γj} which means Fi receives a penalty only if Fi < maxcj∈C(xi,yi) γj for pairs that match positive rules or Fi > −maxcj∈C(xi,yi) γj for negative rules2. The objective function is: Or = ∥F−R∥2 2. In this way, our model can integrate arbitrary “soft” constraints, making it robust to false positives or negatives introduced by these rules. 3.3.3 Non-linear Learning TransLP is a transductive label propagation framework (Liu and Yang, 2015) for link prediction, previously used for applications such as text classification (Xu et al., 2016). In our work, we extend their work for our task, modeling non-linear mappings from entities to hypernyms. 2We do not consider the cases where a pair matches both positive and negative rules because such cases are very rare, and even non-existent in our datasets. However, our method can deal with these cases by using some simple heuristics. For example, we can update Ri using either of the following two ways: i) Ri = Fi and ii) Ri = Fi + P cj∈C(xi,yi) γj. 1397 P1 The head word of the entity x matches that of the candidate hypernym y. For example, 动物 (Animal) is the correct hypernym of 哺乳动物(Mammal). N1 The head word of the entity x matches the non-head word of the candidate hypernym y. For example, 动物学(Zoology) is not a hypernym of 哺乳动物(Mammal). 
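Putting the first two components together in code form: the sketch below builds the weight matrix W and the initial-prediction vector S once, and recomputes the rule vector R from the current F in each iteration as described above, with labeled pairs kept at R_i = ±1 and unlabeled pairs that match no rule kept at R_i = F_i. The data structures (a label array with 0 for unlabeled pairs, a rule_hits list and a gammas lookup) are our own representation of the collection C and the rates γ_i.

```python
import numpy as np

def build_W_S(labels, scores, confs):
    """labels, scores, confs: 1-D numpy arrays of length m.
    labels[i] in {+1, -1, 0} (0 = unlabeled); scores/confs from the initial stage."""
    S = np.where(labels != 0, labels, scores).astype(float)
    w = np.where(labels != 0, 1.0, confs)
    return np.diag(w), S

def update_R(F, labels, rule_hits, gammas):
    """rule_hits[i]: list of (rule_id, sign) matched by pair i, sign = +1/-1;
    gammas[rule_id]: the rule's true-positive/negative rate from the training set."""
    R = F.copy()
    for i, lab in enumerate(labels):
        if lab != 0:
            R[i] = lab
        elif rule_hits[i]:
            g = max(gammas[r] for r, _ in rule_hits[i])
            sign = rule_hits[i][0][1]
            R[i] = max(F[i], g) if sign > 0 else -max(-F[i], g)
    return R
```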
N2 The head word of the candidate hypernym y matches an entry in a Chinese lexicon extended based on the lexicon used in Li et al. (2015). It consists of 184 non-taxonomic, thematic words such as 政治(Politics), 军事(Military), etc. Table 1: Three linguistic rules used in our work for Chinese hypernym prediction. For is-a relations, we find that if y is the hypernym of x, it is likely that y is the hypernym of entities that are semantically close to x. For example, if we know United States is a country, we can infer country is the hypernym of similar entities such as Canada, Australia, etc. This intuition can be encoded in the similarity of the two pairs pi = (xi, yi) and pj = (xj, yj): sim(pi, pj) = ( cos(xi, xj) yi = yj 0 otherwise (1) where xi is the embedding vector of xi3. This similarity indicates there exists a nonlinear mapping from entities to hypernyms, which can not be encoded in linear projection based methods (Fu et al., 2014; Wang and He, 2016). Based on TransLP (Liu and Yang, 2015), this intuition can be model as propagating class labels (is-a or not-is-a) of labeled word/phrase pairs to similar unlabeled ones based on Eq. (1). For example, the score of is-a relations between United State and country will propagate to pairs such as (Canada, country) and (Australia, country) by random walks. Denote F∗as the optimal solution of the problem min Os + Or. Inspired by (Liu and Yang, 2015; Xu et al., 2016), we can add a Gaussian prior N(F∗, Σ) to F where Σ is the covariance matrix and Σi,j = sim(pi, pj). Hence the optimization objective of this part is defined as: On = FT Σ−1F which is linearly proportional to the negative likelihood of the Gaussian random field prior. This means we minimize the training error and encourage F to have a smooth propagation with respect to the similarities among pairs defined by Eq. (1) at the same time. 3We only consider the similarity between entities and not candidate hypernyms because the similar rule for candidate hypernyms is not true. For example, nouns close to country in our Skip-gram model are region, department, etc. They are not all correct hypernyms of United States, Canada, Australia, etc. 3.3.4 Joint Optimization By combining the three components together, we minimize the following function: J(F) = Os + Or + µ1 2 On + µ2 2 ∥F∥2 2 (2) where ∥F∥2 2 imposes an additional smooth l2regularization on F. µ1 and µ2 are regularization parameters that can be tuned manually. Based on the convexity of the optimization problem, we can learn the optimal values of F is via gradient descent. The derivative of F with respect to J(F) is: dJ(F) dF = W2(F−S)+(F−R)+µ1Σ−1F+µ2F which is computationally expensive when m is large. After W2, S, R and Σ−1 are pre-computed, the runtime complexity of the loop of gradient descent is O(tm2) where t is the number of iterations. To speed up the learning process, we introduce a blockwise gradient descent technique. From the definition of Eq. (2), we can see that the optimal values of Fi and Fj with respect to (xi, yi) and (xj, yj) are irrelevant if yi ̸= yj. Therefore, the original optimization problem can be decomposed and solved separately according to different candidate hypernyms. Let H be the collection of candidate hypernyms in DU. For each h ∈H, denote Dh as the collection of word/phase pairs in D+ ∪D−∪DU that share the same candidate hypernym h. The original problem can be decomposed into |H| optimization subproblems over Dh for each h ∈H. 
Denote Wh, Sh, Rh, Fh and Σh as the weight matrix, the initial prediction vector, the rule prediction vector, the final prediction vector and the entity similarity covariance matrix with respect Dh. The objective function can be rewritten as: 1398 J(F) = P h∈H ˜J(Fh) where ˜J(Fh) = ∥Wh(Fh −Sh)∥2 2 + ∥Fh −Rh∥2 2 +µ1 2 FT h Σ−1 h Fh + µ2 2 ∥Fh∥2 2 We additionally use (n) to denote the values of matrices or vectors in the nth iteration. F(n) h is iteratively updated based on the following equation: F(n+1) h = F(n) h −η · d ˜J(F(n) h ) dF(n) h where η is the learning rate. To this end, we present the learning algorithm in Algorithm 1. Algorithm 1 Learning Algorithm 1: Initialize Wh and Sh based on the initial prediction model; 2: Randomly initialize F(0) h ; 3: Compute Σ−1 h based on entity similarities; 4: Initialize counter n = 1; 5: for each linguistic rule ci ∈C do 6: Estimate γi over the training set; 7: end for 8: while ∥F(n) h −F(n+1) h ∥2 < 10−3 do 9: Compute R(n) h based on C and F(n) h ; 10: Calculate d ˜J(F(n) h ) dF(n) h = W2 h(F(n) h −Sh) + (F(n) h −R(n) h ) + µ1Σ−1 h F(n) h + µ2F(n) h ; 11: Compute F(n+1) h for the next iteration: F(n+1) h = F(n) h −η · d ˜J(F(n) h ) dF(n) h ; 12: Update counter n = n + 1; 13: end while 14: return Final prediction vector F(n+1) h ; The runtime complexity of this algorithm is O(P h∈Dh th|Dh|2) where th is the number of iterations to solve the subproblem over Dh. Although we do not know the upper bounds on the numbers of iterations of these two learning techniques, the runtime complexity can be reduced by blockwise gradient descent for two reasons: i) P h∈Dh |Dh| ≤m and ii) th has a large probability to be smaller than t due to the smaller number of data instances. This technique can be also viewed as optimizing Eq. (2) based on blockwise matrix computation. Finally, for each (xi, yi) ∈DU, we predict that yi is a hypernym of xi if Fi > θ where θ ∈(−1, 1) is a threshold tuned on the development set. 4 Experiments In this section, we conduct experiments to evaluate our method. Section 4.1 to Section 4.5 report the experimental steps on Chinese datasets. We present the performance on English datasets in Section 4.6 and a discussion in Section 4.7. 4.1 Experimental Data We have two collections of Chinese word/phase pairs as ground truth datasets. Each pair is labeled with an is-a or not-is-a tag. The first one (denoted as FD) is from Fu et al. (2014), containing 1,391 is-a pairs and 4,294 not-is-a pairs, which is the first publicly available dataset to evaluate this task. The second one (denoted as BK) is larger in size and crawled from Baidu Baike by ourselves, consisting of <entity, category> pairs. For each pair in BK, we ask multiple human annotators to label the tag and discard the pair with inconsistent labels by different annotators. In total, it contains 3,870 is-a pairs and 3,582 not-is-a pairs4. The Chinese text corpus is extracted from the contents of 1.2M entity pages from Baidu Baike5, a Chinese online encyclopedia. It contains approximately 1.1B words. We use the open source toolkit Ansj6 for Chinese word segmentation. Chinese words/phrases in our test sets may consist of multiple Chinese characters. We treat such word/phrase as a whole to learn embeddings, instead of using character-level embeddings. In the following experiments, we use 60% of the data for training, 20% for development and 20% for testing, partitioned randomly. By rotating the 5-fold subsets of the datasets, we report the performance of each method on average. 
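As a companion to Algorithm 1 above (Section 3.3.4), the per-hypernym update can be written compactly as a gradient-descent loop on F_h. The sketch below reuses build_W_S and update_R from the earlier sketch; Sigma_inv is the precomputed inverse covariance built from entity cosine similarities, mu1 and mu2 follow the values reported in Section 4.2, and the learning rate eta is our assumption since no value is given in the excerpt.

```python
import numpy as np

def solve_block(W, S, Sigma_inv, labels, rule_hits, gammas,
                mu1=1e-4, mu2=1e-4, eta=0.01, tol=1e-3, max_iter=1000):
    m = len(S)
    # Labeled pairs start at their labels, unlabeled pairs start randomly in (-1, 1).
    F = np.where(labels != 0, labels,
                 np.random.uniform(-1, 1, size=m)).astype(float)
    W2 = W @ W
    for _ in range(max_iter):
        R = update_R(F, labels, rule_hits, gammas)     # see earlier sketch
        grad = W2 @ (F - S) + (F - R) + mu1 * (Sigma_inv @ F) + mu2 * F
        F_new = F - eta * grad
        if np.linalg.norm(F_new - F) < tol:            # convergence check of Algorithm 1
            return F_new
        F = F_new
    return F

# Final decision for an unlabeled pair i: predict is-a iff F[i] > theta,
# with theta tuned on the development set (0.05 on FD and 0.1 on BK below).
```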
4.2 Parameter Analysis The word embeddings are pre-trained by ourselves on the Chinese corpus. In total, we obtain the 100dimensional embedding vectors of 5.8M distinct words. The regularization parameters are set to λ = 10−3 and µ1 = µ2 = 10−4, fine tuned on the development set. The choice of θ reflects the precision-recall trade-off in our model. A larger value of θ means we pay more attention to precision rather than recall. Figure 2 illustrates the precision-recall curves 4https://chywang.github.io/data/acl17.zip 5https://baike.baidu.com/ 6https://github.com/NLPchina/ansj seg/ 1399 Dataset FD BK Method P R F P R F Fu et al. (2014) (S) 64.1 56.0 59.8 71.4 64.8 67.9 Fu et al. (2014) (P) 66.4 59.3 62.6 72.7 67.5 70.0 Li et al. (2015) 54.3 38.4 45.0 61.2 47.5 53.5 Mirza and Tonelli (2016) (C) 67.7 75.2 69.7 80.3 75.9 78.0 Mirza and Tonelli (2016) (A) 65.3 60.7 62.9 72.7 65.6 68.9 Mirza and Tonelli (2016) (S) 71.9 60.6 65.7 78.4 60.7 68.4 Wang and He (2016) 69.3 64.5 66.9 73.9 69.8 71.8 Ours (Initial) 70.7 69.2 69.9 81.7 78.5 80.0 Ours 72.8 70.5 71.6 83.6 80.6 82.1 Table 2: Performance comparison on test sets for Chinese hypernym prediction (%). 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Recall Precision (a) Dataset: FD 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Recall Precision (b) Dataset: BK Figure 2: Precision-recall curve with respect to the tuning of θ on development sets. on both datasets. It can be seen that the performance of our method is generally better in BK than FD. The most probable cause is that BK is a large dataset with more “balanced” numbers of positive and negative data. Finally, θ is set to 0.05 on FD and 0.1 on BK. 4.3 Performance In a series of previous work (Fu et al., 2013, 2014; Wang and He, 2016), several pattern-based, inference-based and encyclopedia-based is-a relation extraction methods for English have been implemented for the Chinese language. As their experiments show, these methods achieve the Fmeasure of lower than 60% in most cases, which are not suggested to be strong baselines for Chinese hypernym prediction. Interested readers may refer to their papers for the experimental results. To make the convincing conclusion, we employ two recent state-of-the-art approaches for Chinese is-a relation identification (Fu et al., 2014; Wang and He, 2016) as baselines. We also take the word embedding based classification approach (Mirza and Tonelli, 2016)7 and Chinese Wikipedia based 7Although the experiments in their paper are mostly related to temporal relations, the method can be applied to is-a SVM model (Li et al., 2015) as baselines to predict is-a relations between words8. The experimental results are illustrated in Table 2. For Fu et al. (2014), we test the performance using a linear projection model (denoted as S in Table 2) and piecewise projection models (P). It shows that the semantics of is-a relations are better modeled by multiple projection models, with a slight improvement in F-measure. By combining iterative projection models and pattern-based validation, the most recent approach (Wang and He, 2016) increases the F-measure by 4% and 2% in two datasets. In this method, the patternbased statistics are calculated using the same corpus over which we train word embedding models. The main reason of the improvement may be that the projection models have a better generalization power by applying an iterative learning paradigm. 
Mirza and Tonelli (2016) is implemented using three different strategies in combining the word vectors of a pair: i) concatenation xi ⊕yi (derelations without modification. 8Previously, these methods used different knowledge sources to train models and thus the results in their papers are not directly comparable with ours. To make fair comparison, we take the training data as the same knowledge source to train models for all methods. 1400 Candidate Hypernym P T Candidate Hypernym P T Entity: 乙烯(Ethylene) Entity: 孙燕姿(Stefanie Sun) 化学品(Chemical) √ √ 歌手(Singer) √ √ 有机化学(Organic Chemistry) × × 明星(Star) √ √ 有机物(Organics) √ √ 人物(Person) √ √ 气体(Gas) √ √ 金曲奖(Golden Melody Award) √ × 自然科学(Natural Science) × × 音乐人(Musician) √ √ Entity: 显卡(Graphics Card) Entity: 核反应堆(Nuclear Reactor) 硬件(Hardware) √ √ 建筑学(Architecture) × × 电子产品(Electronic Product) √ √ 核科学(Nuclear Science) × × 电脑硬件(Computer Hardware) √ √ 核能(Nuclear Energy) √ × 数码(Digit) × × 自然科学(Natural Science) × × Table 3: Examples of model prediction. (P: prediction result, T: ground truth, √: positive, ×: negative) TP/TN Rate Rule P1 Rule N1 Rule N2 Dataset FD 98.6 92.3 94.1 Dataset BK 97.6 96.8 97.3 Table 4: TP/TN rates of three linguistic rules (%). noted as C), ii) addition xi + yi (A) and iii) subtraction xi −yi (S). As seen, the classification models using addition and subtraction have similar performance in two datasets, while the concatenation strategy outperforms previous two approaches. Although Li et al. (2015) achieve a high performance in their dataset, this method does not perform well in ours. The most likely cause is that the features in that work are designed specifically for the Chinese Wikipedia category system. Our initial model has a higher accuracy than all the baselines. By utilizing the transductive learning framework, we boost the F-measure by 1.7% and 2.1%, respectively. Therefore, our method is effective to predict hypernyms of Chinese entities. We further conduct statistical tests which show our method significantly (p < 0.01) improves the Fmeasure over the state-of-the-art method (Wang and He, 2016). 4.4 Effectiveness of Linguistic Rules To illustrate the effectiveness of linguistic rules, we present the true positive (or negative) rate by using one positive (or negative) rule solely, shown in Table 4. These values serve as γis in the transductive learning stage. The results indicate that these rules have high precision (over 90%) over both datasets for our task. We state that currently we only use a few handcraft linguistic rules in our work. The proposed approach is a general framework that can encode arbitrary numbers of rules and in any language. 4.5 Error Analysis and Case Studies We analyze correct and error cases in the experiments. Some examples of prediction results are shown in Table 3. We can see that our method is generally effective. However, some mistakes occur mostly because it is difficult to distinguish strict is-a and topic-of relations. For example, the entity Nuclear Reactor is semantically close to Nuclear Energy. The error statistics show that such kind of errors account for approximately 80.2% and 78.6% in two test sets, respectively. Based on the literature study, we find that such problem has been also reported in (Fu et al., 2013; Wang and He, 2016). To reduce such errors, we employ the Chinese thematic lexicon based on Li et al. (2015) in the transductive learning stage but the coverage is still limited. 
Two possible solutions are: i) adding more negative training data of this kind; and ii) constructing a large-scale thematic lexicon automatically from the Web. 4.6 Experiments on English Datasets To examine how our method can benefit hypernym prediction for the English language, we use two standard datasets in this paper. The first one is a benchmark dataset for distributional semantic evaluation, i.e., BLESS (Baroni and Lenci, 2011). Because the number of pairs in BLESS is relatively small, we also use the Shwartz (Shwartz et al., 2016) dataset. In the experiments, we treat the HYPER relations as positive data (1,337 pairs) and randomly sample 30% of the RANDOM relations as negative data (3,754 pairs) in BLESS. To create a relatively balanced dataset, we take the random split of Shwartz as input and use only 30% of the negative pairs. The dataset contains 14,135 positive pairs and 16,956 negative pairs. We use English Wikipedia as the text corpus to estimate the 1401 Dataset BLESS Shwartz Method P R F P R F Lenci and Benotto (2012) 42.8 38.6 40.6 38.5 50.1 43.5 Santus et al. (2014) 59.2 52.3 55.4 51.2 71.5 59.6 Fu et al. (2014) (S) 65.3 62.4 63.8 65.6 66.1 65.8 Fu et al. (2014) (P) 68.1 64.2 66.1 62.3 71.9 67.3 Mirza and Tonelli (2016) (C) 79.4 84.1 81.7 79.3 80.9 80.1 Mirza and Tonelli (2016) (A) 80.7 72.3 76.3 79.1 79.6 79.4 Mirza and Tonelli (2016) (S) 78.0 81.2 79.6 80.5 77.5 79.0 Wang and He (2016) 76.2 75.4 75.8 75.1 76.3 75.6 Ours (Initial) 79.3 76.3 77.7 77.2 76.8 77.0 Ours 84.4 79.5 81.9 79.1 77.5 78.3 Table 5: Performance comparison on test sets for English hypernym prediction (%). statistics, and the pre-trained embedding vectors of English words9. For comparison, we test all the baselines over English datasets except Li et al. (2015). This is because most features in Li et al. (2015) can only be used in the Chinese environment. To implement Wang and He (2016) for English, we use the original Hearst patterns (Hearst, 1992) to perform relation selection and do not consider not-is-a patterns. We also take two recent DSM based approaches (Lenci and Benotto, 2012; Santus et al., 2014) as baselines. As for our own method, we do not use linguistic rules in Table 1 for English. The results are illustrated in Table 5. As seen, our method is superior to all the baselines over BLESS, with an F-measure of 81.9%. In Shwartz, while the approach (Mirza and Tonelli, 2016) has the highest F-measure of 80.1%, our method is generally comparable to theirs and outperforms others. The results suggest that although our method is not necessarily the state-of-the-art for English hypernym prediction, it has several potential applications. Refer to Section 4.7 for discussion. 4.7 Discussion From the experiments, we can see that the proposed approach outperforms the state-of-the-art methods for Chinese hypernym prediction. Although the English language is not our focus, our approach still has relatively high performance. Additionally, our work has potential values for the following applications: • Domain-specific or Context-sparse Relation Extraction. If the task is to predict re9http://nlp.stanford.edu/projects/glove/ lations between words when it is related to a specific domain or the contexts are sparse, even for English, traditional pattern-based methods are likely to fail. Our method can predict the existence of relations without explicit textual patterns and requires a relatively small amount of pairs as training data. • Under-resourced Language Learning. 
Our method can be adapted for relation extraction in languages with flexible expressions, few knowledge resources and/or lowperformance NLP tools. Our method does not require deep NLP parsing of sentences in a text corpus and thus the performance is not affected by parsing errors. 5 Conclusion In summary, this paper introduces a transuctive learning approach for Chinese hypernym prediction. By modeling linear projection models, linguistic rules and non-linear mappings, our method is able to identify Chinese hypernyms with high accuracy. Experiments show that the performance of our method outperforms previous approaches. We also discuss the potential applications of our method besides Chinese hypernym prediction. In our work, the candidate Chinese hyponyms and hypernyms are extracted from user generated categories. In the future, we will study how to construct a taxonomy from texts in Chinese. Acknowledgements This work is supported by the National Key Research and Development Program of China under Grant No. 2016YFB1000904. 1402 References Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics. pages 1—-10. Rahul Bhagat, Patrick Pantel, and Eduard H. Hovy. 2007. LEDIR: an unsupervised algorithm for learning directionality of inference rules. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 161–170. Sharon A. Caraballo. 1999. Automatic construction of a hypernym-labeled noun hierarchy from text. In 27th Annual Meeting of the Association for Computational Linguistics. Andrew Carlson, Justin Betteridge, Richard C. Wang, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Coupled semi-supervised learning for information extraction. In Proceedings of the Third International Conference on Web Search and Web Data Mining. pages 101–110. Olivier Chapelle, Bernhard Sch¨olkopf, and Alexander Zien. 2006. Transductive Inference and SemiSupervised Learning. MIT Press. Oren Etzioni, Michael J. Cafarella, Doug Downey, Stanley Kok, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2004. Web-scale information extraction in knowitall: (preliminary results). In Proceedings of the 13th international conference on World Wide Web. pages 100–110. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hierarchies via word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. pages 1199–1209. Ruiji Fu, Bing Qin, and Ting Liu. 2013. Exploiting multiple sources for open-domain hypernym discovery. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pages 1224–1234. Alexander Gammerman, Katy S. Azoury, and Vladimir Vapnik. 1998. Learning by transduction. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. pages 148–155. Gene H. Golub, Per Christian Hansen, and Dianne P. O’Leary. 1999. Tikhonov regularization and total least squares. SIAM J. Matrix Analysis Applications 21(1):185–194. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In 14th International Conference on Computational Linguistics. pages 539–545. Chu-Ren Huang, Ru-Yng Chang, and Hshiang-Pin Lee. 2004. Sinica BOW (bilingual ontological wordnet): Integration of bilingual wordnet and SUMO. 
In Proceedings of the Fourth International Conference on Language Resources and Evaluation. Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. Natural Language Engineering 16(4):359–389. Zornitsa Kozareva and Eduard H. Hovy. 2010. Learning arguments and supertypes of semantic relations using recursive patterns. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. pages 1482–1491. Alessandro Lenci and Giulia Benotto. 2012. Identifying hypernyms in distributional semantic spaces. In Proceedings of the Sixth International Workshop on Semantic Evaluation. pages 543–546. Hai-Guang Li, Xindong Wu, Zhao Li, and Gong-Qing Wu. 2013. A relation extraction method of chinese named entities based on location and semantic features. Appl. Intell. 38(1):1–15. Jinyang Li, Chengyu Wang, Xiaofeng He, Rong Zhang, and Ming Gao. 2015. User generated content oriented chinese taxonomy construction. In Web Technologies and Applications - 17th Asia-Pacific Web Conference. pages 623–634. Hanxiao Liu and Yiming Yang. 2015. Bipartite edge prediction via transductive learning over product graphs. In Proceedings of the 32nd International Conference on Machine Learning. pages 1880– 1888. Weiming Lu, Renjie Lou, Hao Dai, Zhenyu Zhang, Shansong Yang, and Baogang Wei. 2015. Taxonomy induction from chinese encyclopedias by combinatorial optimization. In Proceedings of the 4th CCF Conference on Natural Language Processing and Chinese Computing. pages 299–312. Anil Menon, Kishan Mehrotra, Chilukuri K. Mohan, and Sanjay Ranka. 1996. Characterization of a class of sigmoid functions with applications to neural networks. Neural Networks 9(5):819–835. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. George A. Miller. 1995. Wordnet: a lexical database for english. Communications of the Acm 38(11):39– 41. Paramita Mirza and Sara Tonelli. 2016. On the contribution of word embeddings to temporal relation classification. In Proceedings of the 26th International Conference on Computational Linguistics. pages 2818–2828. 1403 Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. Simone Paolo Ponzetto and Michael Strube. 2007. Deriving a large-scale taxonomy from wikipedia. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence. pages 1440–1445. Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016. AFET: automatic finegrained entity typing by hierarchical partial-label embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1369–1378. Alan Ritter, Stephen Soderland, and Oren Etzioni. 2009. What is this, anyway: Automatic hypernym discovery. In Learning by Reading and Learning to Read, the 2009 AAAI Spring Symposium. pages 88– 93. Erik F. Tjong Kim Sang. 2007. Extracting hypernym pairs from the web. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Erik F. Tjong Kim Sang and Katja Hofmann. 2009. Lexical patterns or dependency patterns: Which is better for hypernym extraction? 
In Proceedings of the Thirteenth Conference on Computational Natural Language Learning. pages 174–182. Enrico Santus, Alessandro Lenci, Qin Lu, and Sabine Schulte im Walde. 2014. Chasing hypernyms in vector spaces with entropy. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. pages 38–42. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2004. Learning syntactic patterns for automatic hypernym discovery. In Advances in Neural Information Processing Systems 17, NIPS 2004. pages 1297–1304. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web. pages 697–706. Idan Szpektor, Eyal Shnarch, and Ido Dagan. 2007. Instance-based evaluation of entailment rule acquisition. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. page 456–463. Chengyu Wang, Ming Gao, Xiaofeng He, and Rong Zhang. 2015. Challenges in chinese knowledge graph construction. In Proceedings of the 31st IEEE International Conference on Data Engineering Workshops. pages 59–61. Chengyu Wang and Xiaofeng He. 2016. Chinese hypernym-hyponym extraction from user generated categories. In Proceedings of the 26th International Conference on Computational Linguistics. pages 1350–1361. Shan Wang and Francis Bond. 2013. Cbuilding the chinese open wordnet (cow): Starting from core synsets. In Proceedings of the 11th Workshop on Asian Language Resources: ALR-2013 a Workshop of The 6th International Joint Conference on Natural Language Processing. pages 10–18. Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Qili Zhu. 2012. Probase: a probabilistic taxonomy for text understanding. In Proceedings of the ACM SIGMOD International Conference on Management of Data. pages 481–492. Renjie Xu, Zhiqiang Gao, Yingji Pan, Yuzhong Qu, and Zhisheng Huang. 2008. An integrated approach for automatic construction of bilingual chinese-english wordnet. In The Semantic Web, Proceedings of the 3rd Asian Semantic Web Conference. pages 302– 314. Ruochen Xu, Yiming Yang, Hanxiao Liu, and Andrew Hsi. 2016. Cross-lingual text classification via model translation with limited dictionaries. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. pages 95–104. 1404

Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1405–1414, Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1129

A Constituent-Centric Neural Architecture for Reading Comprehension
Pengtao Xie*† and Eric P. Xing†
*Machine Learning Department, Carnegie Mellon University
†Petuum Inc.
[email protected], [email protected]

Abstract
Reading comprehension (RC), aiming to understand natural texts and answer questions therein, is a challenging task. In this paper, we study the RC problem on the Stanford Question Answering Dataset (SQuAD). Observing from the training set that most correct answers are centered around constituents in the parse tree, we design a constituent-centric neural architecture where the generation of candidate answers and their representation learning are both based on constituents and guided by the parse tree. Under this architecture, the search space of candidate answers can be greatly reduced without sacrificing the coverage of correct answers, and the syntactic, hierarchical and compositional structure among constituents can be well captured, which contributes to better representation learning of the candidate answers. On SQuAD, our method achieves state-of-the-art performance and the ablation study corroborates the effectiveness of individual modules.

1 Introduction
Reading comprehension (RC) aims to answer questions by understanding texts, which is a challenging task in natural language processing. Various RC tasks and datasets have been developed, including Machine Comprehension Test (Richardson et al., 2013) for multiple-choice question answering (QA) (Sachan et al., 2015; Wang and McAllester, 2015), Algebra (Hosseini et al., 2014) and Science (Clark and Etzioni, 2016) for passing standardized tests (Clark et al., 2016), CNN/Daily Mail (Hermann et al., 2015) and Children's Book Test (Hill et al., 2015) for cloze-style QA (Chen et al., 2016; Shen et al., 2016), WikiQA (Yang et al., 2015), Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) and Microsoft Machine Reading Comprehension (Nguyen et al., 2016) for open domain QA.
Figure 1: An example of the SQuAD QA task. Passage: "The most authoritative account at the time came from the medical faculty in Paris in a report to the king of France that blamed the heavens. This report became the first and most widely circulated of a series of plague tracts that sought to give advice to sufferers. That the plague was caused by bad air became the most widely accepted theory. Today, this is known as the Miasma theory." Questions: 1. Who was the medical report written for? (the king of France) 2. What is the newer, more widely accepted theory behind the spread of the plague? (bad air) 3. What is the bad air theory officially known as? (Miasma theory)
In this paper, we are specifically interested in solving the SQuAD QA task (Figure 1 shows an example), in light of the following features: (1) large scale: 107,785 questions, 23,215 paragraphs; (2) non-synthetic: questions are generated by crowdworkers; (3) large search space of candidate answers. We study two major problems: (1) how to generate candidate answers?
Unlike in multiple-choice QA and cloze-style QA, where a small set of answer choices is given, an answer in SQuAD could be any span in the text, resulting in a large search space of size $O(n^2)$ (Rajpurkar et al., 2016), where $n$ is the number of words in the sentence. This incurs a lot of noise, ambiguity and uncertainty, making it highly difficult to pick out the correct answer. (2) how to effectively represent the candidate answers? First, long-range semantics spanning multiple sentences need to be captured. As noted in (Rajpurkar et al., 2016), answering many questions requires multiple-sentence reasoning. For instance, in Figure 1, the last two sentences of the passage are needed to answer the third question. Second, local syntactic structure needs to be incorporated into representation learning. The study by Rajpurkar et al. (2016) shows that syntax plays an important role in SQuAD QA: there is a wide range of syntactic divergence between a question and the sentence containing the answer; answering 64.1% of the questions requires dealing with syntactic variation; and experiments show that syntactic features are the major contributing factors to good performance.
To tackle the first problem, motivated by the observation in (Rajpurkar et al., 2016) that the correct answers picked by humans are not arbitrary spans, but rather centered around constituents in the parse tree, we generate candidate answers based upon constituents, which significantly reduces the search space. Different from Rajpurkar et al. (2016), who only consider exact constituents, we adopt a constituent expansion mechanism which greatly improves the coverage of correct answers. For the representation learning of candidate answers, which are sequences of constituents, we first encode individual constituents using a chain-of-trees LSTM (CT-LSTM) and a tree-guided attention mechanism, then feed these encodings into a chain LSTM (Hochreiter and Schmidhuber, 1997) to generate representations for the constituent sequences. The CT-LSTM seamlessly integrates intra-sentence tree LSTMs (Tai et al., 2015), which capture the local syntactic properties of constituents, and an inter-sentence chain LSTM, which glues together the sequence of tree LSTMs such that the semantics of each sentence can be propagated to the others. The tree-guided attention leverages the hierarchical relations among constituents to learn question-aware representations. Putting these pieces together, we design a constituent-centric neural network (CCNN), which contains four layers: a chain-of-trees LSTM encoding layer, a tree-guided attention layer, a candidate-answer generation layer and a prediction layer. Evaluation on SQuAD demonstrates the effectiveness of CCNN.
Figure 2: Percentage of answers that differ from their closest constituents by N words.
Figure 3: Constituent-centric neural network.
2 Constituent-Centric Neural Network for Reading Comprehension
2.1 Overall Architecture
As observed in (Rajpurkar et al., 2016), almost all correct answers are centered around constituents. To formally confirm this, we compare the correct answers in the training set with constituents generated by the Stanford parser (Manning et al., 2014): for each correct answer, we find its "closest" constituent – the longest constituent that is a substring of the answer – and count how many words they differ by (let $N$ denote this number). Figure 2 shows the percentage of answers whose $N$ equals $0, \cdots, 8$ and $N > 8$. As can be seen, about 70% of the answers are exactly constituents ($N = 0$) and about 97% of the answers differ from their closest constituents by at most 4 words.
This observation motivates us to approach the reading comprehension problem in a constituent-centric manner, where the generation of candidate answers and their representation learning are both based upon constituents. Specifically, we design a Constituent-Centric Neural Network (CCNN) to perform end-to-end reading comprehension, where the inputs are the passage and question, and the output is the span in the passage that is most suitable to answer the question. As shown in Figure 3, the CCNN contains four layers. In the encoding layer, the chain-of-trees LSTM and the tree LSTM encode the constituents in the passage and question respectively. The encodings are fed to the tree-guided attention layer to learn question-aware representations, which are passed to the candidate-answer generation layer to produce and encode the candidate answers based on constituent expansion. Finally, the prediction layer picks the best answer from the candidates using a feed-forward network.
2.2 Encoding
Given the passages and questions, we first use the Stanford parser to parse them into constituent parse trees; the encoding layer of CCNN then learns representations for constituents in questions and passages, using a tree LSTM (Tai et al., 2015) and a chain-of-trees LSTM respectively. These LSTM encoders are able to capture the syntactic properties of constituents and long-range semantics across multiple sentences, which are crucial for SQuAD QA.
2.2.1 Tree LSTM for Question Encoding
Each question is a single sentence and has one constituent parse tree. Internal nodes in the tree represent constituents having more than one word and leaf nodes represent single-word constituents. Inspired by (Tai et al., 2015; Teng and Zhang, 2016), we build a bi-directional tree LSTM, which consists of a bottom-up LSTM and a top-down LSTM, to encode these constituents (as shown in Figure 4).
Figure 4: Chain-of-trees LSTM.
Each node (constituent) has two hidden states: $h_\uparrow$ produced by the LSTM in the bottom-up direction and $h_\downarrow$ produced by the LSTM in the top-down direction. Let $T$ denote the maximum number of children an internal node could have. For each particular node, let $L$ ($0 \le L \le T$) be the number of children it has, $h^{(l)}_\uparrow$ and $c^{(l)}_\uparrow$ be the bottom-up hidden state and memory cell of the $l$-th ($1 \le l \le L$) child (if any), and $h^{(p)}_\downarrow$ and $c^{(p)}_\downarrow$ be the top-down hidden state and memory cell of the parent. In the bottom-up LSTM, each node has an input gate $i_\uparrow$, $L$ forget gates $\{f^{(l)}_\uparrow\}_{l=1}^{L}$ corresponding to the different children, an output gate $o_\uparrow$ and a memory cell $c_\uparrow$.
For an internal node, the inputs are the hidden states and memory cells of its children, and the transition equations are defined as:
$i_\uparrow = \sigma\big(\sum_{l=1}^{L} W^{(i,l)}_\uparrow h^{(l)}_\uparrow + b^{(i)}_\uparrow\big)$
$f^{(l)}_\uparrow = \sigma\big(W^{(f,l)}_\uparrow h^{(l)}_\uparrow + b^{(f,l)}_\uparrow\big), \;\; \forall l$
$o_\uparrow = \sigma\big(\sum_{l=1}^{L} W^{(o,l)}_\uparrow h^{(l)}_\uparrow + b^{(o)}_\uparrow\big)$
$u_\uparrow = \tanh\big(\sum_{l=1}^{L} W^{(u,l)}_\uparrow h^{(l)}_\uparrow + b^{(u)}_\uparrow\big)$
$c_\uparrow = i_\uparrow \odot u_\uparrow + \sum_{l=1}^{L} f^{(l)}_\uparrow \odot c^{(l)}_\uparrow$
$h_\uparrow = o_\uparrow \odot \tanh(c_\uparrow)$   (1)
where the weight parameters $W$ and bias parameters $b$ with superscript $l$, such as $W^{(i,l)}_\uparrow$, are specific to the $l$-th child. For a leaf node, which represents a single word, there is no forget gate and the input is the word embedding (Pennington et al., 2014) of this word.
In the top-down direction, the gates, memory cell and hidden state are defined in a similar fashion to the bottom-up direction (Eq. (1)). For an internal node other than the root, the inputs are the hidden state $h^{(p)}_\downarrow$ and memory cell $c^{(p)}_\downarrow$ of its parent. For a leaf node, in addition to $h^{(p)}_\downarrow$ and $c^{(p)}_\downarrow$, the inputs also contain the word embedding. For the root node, the top-down hidden state $h^{(r)}_\downarrow$ is set to its bottom-up hidden state $h^{(r)}_\uparrow$. $h^{(r)}_\uparrow$ captures the semantics of all constituents, which is then replicated as $h^{(r)}_\downarrow$ and propagated downwards to each individual constituent.
Concatenating the hidden states of the two directions, we obtain the LSTM encoding for each node, $h = [h_\uparrow; h_\downarrow]$, which will be the input of the attention layer. The bottom-up hidden state $h_\uparrow$ composes the semantics of the sub-constituents contained in this constituent, and the top-down hidden state $h_\downarrow$ captures the contextual semantics manifested in the entire sentence.
2.2.2 Chain-of-Trees LSTM for Passage Encoding
To encode the passage, which contains multiple sentences, we design a chain-of-trees LSTM (Figure 4). A bi-directional tree LSTM is built for each sentence to capture the local syntactic structure, and these tree LSTMs are glued together via a bi-directional chain LSTM (Graves et al., 2013) to capture long-range semantics spanning multiple sentences. The hidden states generated by the bottom-up tree LSTM serve as the input of the chain LSTM. Likewise, the chain LSTM states are fed to the top-down tree LSTM. This enables the encoding of every constituent to be propagated to all other constituents in the passage.
In the chain LSTM, each sentence $t$ is treated as a unit. The input of this unit is generated by the tree LSTM of sentence $t$, namely the bottom-up hidden state $h_{\uparrow t}$ at the root. Sentence $t$ is associated with a forward hidden state $\overrightarrow{h}_t$ and a backward state $\overleftarrow{h}_t$. In the forward direction, the transition equations among the input gate $\overrightarrow{i}_t$, forget gate $\overrightarrow{f}_t$, output gate $\overrightarrow{o}_t$ and memory cell $\overrightarrow{c}_t$ are:
$\overrightarrow{i}_t = \sigma(\overrightarrow{W}^{(i)} h_{\uparrow t} + \overrightarrow{U}^{(i)} \overrightarrow{h}_{t-1} + \overrightarrow{b}^{(i)})$
$\overrightarrow{f}_t = \sigma(\overrightarrow{W}^{(f)} h_{\uparrow t} + \overrightarrow{U}^{(f)} \overrightarrow{h}_{t-1} + \overrightarrow{b}^{(f)})$
$\overrightarrow{o}_t = \sigma(\overrightarrow{W}^{(o)} h_{\uparrow t} + \overrightarrow{U}^{(o)} \overrightarrow{h}_{t-1} + \overrightarrow{b}^{(o)})$
$\overrightarrow{u}_t = \tanh(\overrightarrow{W}^{(u)} h_{\uparrow t} + \overrightarrow{U}^{(u)} \overrightarrow{h}_{t-1} + \overrightarrow{b}^{(u)})$
$\overrightarrow{c}_t = \overrightarrow{i}_t \odot \overrightarrow{u}_t + \overrightarrow{f}_t \odot \overrightarrow{c}_{t-1}$
$\overrightarrow{h}_t = \overrightarrow{o}_t \odot \tanh(\overrightarrow{c}_t)$   (2)
The backward LSTM is defined in a similar way. Subsequently, $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$, which encapsulate the semantics of all sentences, are input to the root of the top-down tree LSTM and propagated to all the constituents in sentence $t$.
To sum up, the CT-LSTM encodes a passage in the following way: (1) the bottom-up tree LSTMs compute hidden states $h_\uparrow$ for each sentence and feed the $h_\uparrow$ of the root node into the chain LSTM; (2) the chain LSTM computes forward and backward states and feeds them into the roots of the top-down tree LSTMs; (3) the top-down tree LSTMs compute hidden states $h_\downarrow$. At each constituent $C$, the bottom-up state $h_\uparrow$ captures the semantics of the sub-constituents in $C$ and the top-down state $h_\downarrow$ captures the semantics of the entire passage.
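The bottom-up node update in Eq. (1) can be written down almost literally. The sketch below is a minimal, illustrative NumPy version of a single bottom-up step at one internal node; the parameter layout (per-child weight matrices and biases passed as lists) is our own assumption for the example and not code from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bottom_up_step(child_h, child_c, W_i, W_f, W_o, W_u, b_i, b_f, b_o, b_u):
    """One bottom-up tree-LSTM update (Eq. 1) at an internal node.

    child_h, child_c: lists of L child hidden states / memory cells, each shape (d,)
    W_i, W_f, W_o, W_u: lists of L per-child weight matrices, each shape (d, d)
    b_i, b_o, b_u: shared bias vectors, shape (d,); b_f: list of L per-child biases
    """
    L = len(child_h)
    # Input, output and candidate gates sum contributions over all children.
    i = sigmoid(sum(W_i[l] @ child_h[l] for l in range(L)) + b_i)
    o = sigmoid(sum(W_o[l] @ child_h[l] for l in range(L)) + b_o)
    u = np.tanh(sum(W_u[l] @ child_h[l] for l in range(L)) + b_u)
    # One forget gate per child decides how much of that child's cell is kept.
    f = [sigmoid(W_f[l] @ child_h[l] + b_f[l]) for l in range(L)]
    c = i * u + sum(f[l] * child_c[l] for l in range(L))
    h = o * np.tanh(c)
    return h, c
```

Running this recursively from the leaves (whose input is the word embedding and which have no forget gate) to the root yields the bottom-up states; the top-down pass mirrors it with the parent's state and cell as input.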
2.3 Tree-Guided Attention Mechanism
We propose a tree-guided attention (TGA) mechanism to learn a question-aware representation for each constituent in the passage, which consists of three ingredients: (1) constituent-level attention score computation; (2) tree-guided local normalization; (3) tree-guided attentional summarization.
Given a constituent $h^{(p)}$ in the passage, for each constituent $h^{(q)}$ in the question, an unnormalized attention weight score $a$ is computed as $a = h^{(p)} \cdot h^{(q)}$, which measures the similarity between the two constituents. Then we perform a tree-guided local normalization of these scores. At each internal node in the parse tree, where the unnormalized attention scores of its $L$ children are $\{a_l\}_{l=1}^{L}$, a local normalization is performed using a softmax operation $\tilde{a}_l = \exp(a_l) / \sum_{m=1}^{L} \exp(a_m)$, which maps these scores into a probabilistic simplex. This normalization scheme stands in contrast with the global normalization adopted in word-based attention (Wang and Jiang, 2016; Wang et al., 2016), where a single softmax is globally applied to the attention scores of all the words in the question.
Given these locally normalized attention scores, we merge the LSTM encodings of constituents in the question into an attentional representation in a recursive and bottom-up way. At each internal node, let $h$ be its LSTM encoding, $a$ and $\{a_l\}_{l=1}^{L}$ be the normalized attention scores of this node and its $L$ children, and $\{b_l\}_{l=1}^{L}$ be the attentional representations (defined below) generated at the children; then the attentional representation $b$ of this node is defined as:
$b = a\,\big(h + \sum_{l=1}^{L} a_l b_l\big)$   (3)
which takes the weighted representation $\sum_{l=1}^{L} a_l b_l$ contributed by its children, adds in its own encoding $h$, then performs a re-weighting using the attention score $a$. The attentional representation $b^{(r)}$ at the root node acts as the final summarization of the constituents in the question. We concatenate it to the LSTM encoding $h^{(p)}$ of the passage constituent and obtain a concatenated representation $z = [h^{(p)}; b^{(r)}]$, which will be the input of the candidate answer generation layer.
Unlike the word-based flat-structure attention mechanism (Wang and Jiang, 2016; Wang et al., 2016), where the attention scores are computed between words, normalized using a single global softmax, and summarized in a flat manner, the tree-guided attention calculates attention scores between constituents, normalizes them locally at each node in the parse tree and computes the attentional summary in a hierarchical way. Tailored to the parse tree, TGA is able to capture the syntactic, hierarchical and compositional structures among constituents and arguably generates better attentional representations, as we will validate in the experiments.
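To make the recursion in Eq. (3) concrete, here is a small illustrative sketch (not the authors' code) that walks a question parse tree bottom-up, locally normalizes the children's scores with a softmax, and combines the child summaries with the node's own encoding. The `Node` class, its field names, and the treatment of the root's own score (left unnormalized, since the paper does not specify it) are assumptions made for the example.

```python
import numpy as np

class Node:
    def __init__(self, h, children=()):
        self.h = h                  # LSTM encoding of this constituent, shape (d,)
        self.children = list(children)
        self.score = None           # attention score a (normalized among siblings)
        self.b = None               # attentional representation

def tree_guided_attention(root, h_passage):
    """Return the root summary b^(r) for one passage-constituent encoding h_passage."""
    def annotate(node):
        # Unnormalized constituent-level score: dot-product similarity.
        node.score = float(h_passage @ node.h)
        for c in node.children:
            annotate(c)
        if node.children:
            # Tree-guided local normalization over this node's children only.
            s = np.array([c.score for c in node.children])
            s = np.exp(s - s.max())
            for c, a in zip(node.children, s / s.sum()):
                c.score = a
        # Note: the root keeps its raw score (an assumption of this sketch).

    def summarize(node):
        child_sum = sum(c.score * summarize(c) for c in node.children)
        node.b = node.score * (node.h + child_sum)   # Eq. (3)
        return node.b

    annotate(root)
    return summarize(root)
```

The returned root summary would then be concatenated with the passage-constituent encoding to form $z$, as described above.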
2.4 Candidate Answer Generation
As shown in Figure 2, while most correct answers in the training set are exactly constituents, some are not. To cover the non-constituent answers, we propose to expand each constituent by appending words adjacent to it. Let $C$ denote a constituent and $S = \cdots w_{i-1} w_i\, C\, w_j w_{j+1} \cdots$ be the sentence containing $C$. We expand $C$ by appending words preceding $C$ (such as $w_{i-1}$ and $w_i$) and words succeeding $C$ (such as $w_j$ and $w_{j+1}$) to $C$. We define an $(l, r)$-expansion of a constituent $C$ as follows: append the $l$ words preceding $C$ in the sentence to $C$, and append the $r$ words succeeding $C$ to $C$. Let $M$ be the maximum expansion number, so that $l \le M$ and $r \le M$.
Figure 5 shows an example.
Figure 5: Constituent expansion. (Left) Parse tree of a sentence in the passage. (Top right) Expansions of constituent C1 and their reductions. (Bottom right) Learning the representation of an expansion using a bidirectional chain LSTM.
On the left is the constituent parse tree of the sentence "it came from the medical faculty in Paris". On the upper right are the expansions of the constituent C1 – "the medical faculty". To expand this constituent, we trace it back to the sentence and look up the M (M = 2 in this case) words preceding C1 (which are "came" and "from") and succeeding C1 (which are "in" and "Paris"). Then combinations of C1 and the preceding/succeeding words are taken to generate constituent expansions. On both the left and right sides of C1, we have three choices of expansion: expanding 0, 1 or 2 words. Taking combinations of these cases, we obtain 9 expansions, including C1 itself (the (0, 0)-expansion).
The next step is to perform reduction of constituent expansions. Two things need to be reduced. First, while expanding the current constituent, new constituents may come into being. For instance, in the expansion "came from C1 in Paris", "in" and "Paris" form a constituent C3; "from" and C1 form a constituent C2; "came", C2 and C3 form a constituent C4. Eventually, this expansion is reduced to C4. Second, the expansions generated from different constituents may overlap, and the duplicated expansions need to be removed. For example, the (2, 1)-expansion of C1 – "came from the medical faculty in" – can be reduced to "came C2 in", which is the (1, 1)-expansion of C2. After reduction, each expansion is a sequence of constituents.
Next we encode these candidate answers; the encodings will be utilized in the prediction layer. In light of the fact that each expansion is a constituent sequence, we build a bi-directional chain LSTM (Figure 5, bottom right) to synthesize the representations of the individual constituents therein. Let $E = C_1 \cdots C_n$ be an expansion consisting of $n$ constituents. In the chain LSTM, the input of unit $i$ is the combined representation of $C_i$. We concatenate the forward hidden state at $C_n$ and the backward state at $C_1$ as the final representation of $E$.
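A minimal sketch of the $(l, r)$-expansion step described above (deduplication and reduction back to constituents are omitted); the tokenized-sentence representation and the function name are our own choices for illustration, not the paper's code.

```python
def expand_constituent(tokens, start, end, M=2):
    """Generate all (l, r)-expansions of the constituent tokens[start:end].

    tokens: the sentence as a list of words
    start, end: span of the constituent (end exclusive)
    M: maximum number of words appended on each side
    Returns a list of (new_start, new_end) spans, including (start, end) itself.
    """
    spans = []
    for l in range(M + 1):                   # words appended on the left
        for r in range(M + 1):               # words appended on the right
            s, e = start - l, end + r
            if 0 <= s and e <= len(tokens):  # stay inside the sentence
                spans.append((s, e))
    return spans

# Example: "it came from the medical faculty in Paris", C1 = "the medical faculty"
sent = "it came from the medical faculty in Paris".split()
print(len(expand_constituent(sent, 3, 6)))   # 9 expansions when M = 2
```

In the full method, each expansion span is additionally reduced to a sequence of constituents and duplicates arising from different seed constituents are merged before encoding.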
2.5 Answer Prediction and Parameter Learning
Given the representations of candidate answers, we use a feed-forward network $f: \mathbb{R}^d \rightarrow \mathbb{R}$ to predict the correct answer. The input of the network is the feature vector of a candidate answer and the output is a confidence score. The candidate with the largest score is chosen as the correct answer.
For parameter learning, we normalize the confidence scores into a probabilistic simplex using softmax and define a cross-entropy loss thereupon. Let $J_k$ be the number of candidate answers produced from the $k$-th passage-question pair and $\{z^{(k)}_j\}_{j=1}^{J_k}$ be their representations. Let $t_k$ be the index of the correct answer. Then the cross-entropy loss over $K$ pairs is defined as
$\sum_{k=1}^{K} \Big( -f\big(z^{(k)}_{t_k}\big) + \log \sum_{j=1}^{J_k} \exp\big(f(z^{(k)}_j)\big) \Big)$   (4)
Model parameters are learned by minimizing this loss using stochastic gradient descent.
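As a sanity check on Eq. (4), the snippet below computes the same softmax cross-entropy over the candidate scores of one passage-question pair; it is an illustrative NumPy sketch, not the paper's implementation.

```python
import numpy as np

def candidate_loss(scores, target):
    """Cross-entropy loss of Eq. (4) for one passage-question pair.

    scores: array of shape (J,) holding f(z_j) for each candidate answer
    target: index of the correct candidate
    """
    scores = np.asarray(scores, dtype=float)
    # log-sum-exp with max subtraction for numerical stability
    m = scores.max()
    logsumexp = m + np.log(np.exp(scores - m).sum())
    return -scores[target] + logsumexp

# Example: three candidates, the second one is correct.
print(candidate_loss([1.2, 3.1, -0.4], target=1))
```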
3 Experiments
3.1 Experimental Setup
The experiments are conducted on the Stanford Question Answering Dataset (SQuAD) v1.1, which contains 107,785 questions and 23,215 passages coming from 536 Wikipedia articles. The data was randomly partitioned into a training set (80%), a development set (10%) and an unreleased test set (10%). Rajpurkar et al. (2016) built a leaderboard to evaluate and publish results on the test set. Due to software copyright issues, we did not participate in this online evaluation. Instead, we use the development set (which is untouched during model training) as the test set. In training, if the correct answer is not in the candidate-answer set, we use the shortest candidate containing the correct answer as the target.
The Stanford parser is utilized to obtain the constituent parse trees for questions and passages. In the parse tree, any internal node which has one child is merged together with its child. For instance, in "(NP (NNS sufferers))", the parent "NP" has only one child "(NNS sufferers)", so we merge them into "(NP sufferers)". We use 300-dimensional word embeddings from GloVe (Pennington et al., 2014) to initialize the model. Words not found in GloVe are initialized as zero vectors. We use a feed-forward network with 2 hidden layers (both having the same number of units) for answer prediction. The activation function is set to rectified linear. Hyperparameters in CCNN are tuned via 5-fold cross validation (CV) on the training set, summarized in Table 1.
Table 1: Hyperparameter tuning.
Parameter | Tuning Range | Best Choice
Maximum expansion number M in constituent expansion | 0, 1, 2, 3, 4, 5 | 2
Size of hidden state in all LSTMs | 50, 100, 150, 200, 250, 300 | 100
Size of hidden state in prediction network | 100, 200, 300, 400, 500 | 400
We use the ADAM (Kingma and Ba, 2014) optimizer to train the model with an initial learning rate of 0.001 and a mini-batch size of 100. An ensemble model is also trained, consisting of 10 training runs using the same hyperparameters. The performance is evaluated by two metrics (Rajpurkar et al., 2016): (1) exact match (EM), which measures the percentage of predictions that match any one of the ground truth answers exactly; (2) F1 score, which measures the average overlap between the prediction and the ground truth answer. In the development set each question has about three ground truth answers. F1 scores with the best matching answers are used to compute the average F1 score.
3.2 Results
Table 2 shows the performance of our model and previous approaches on the development set.
Table 2: Results on the development set.
System | EM (%) | F1 (%)
Single model:
Logistic Regression (Rajpurkar et al., 2016) | 40.0 | 51.0
Fine Grained Gating (Yang et al., 2016) | 60.0 | 71.3
Dynamic Chunk Reader (Yu et al., 2016) | 62.5 | 71.2
Match-LSTM with Answer Pointer (Wang and Jiang, 2016) | 64.1 | 73.9
Dynamic Coattention Network (Xiong et al., 2016) | 65.4 | 75.6
Multi-Perspective Context Matching (Wang et al., 2016) | 66.1 | 75.8
Recurrent Span Representations (Lee et al., 2016) | 66.4 | 74.9
Bi-Directional Attention Flow (Seo et al., 2016) | 68.0 | 77.3
Ensemble:
Fine Grained Gating (Yang et al., 2016) | 62.4 | 73.4
Match-LSTM with Answer Pointer (Wang and Jiang, 2016) | 67.6 | 76.8
Recurrent Span Representations (Lee et al., 2016) | 68.2 | 76.7
Multi-Perspective Context Matching (Wang et al., 2016) | 69.4 | 78.6
Dynamic Coattention Network (Xiong et al., 2016) | 70.3 | 79.4
Bi-Directional Attention Flow (Seo et al., 2016) | 73.3 | 81.1
CCNN ablation (single model):
Replacing tree LSTM with chain LSTM | 63.5 | 73.9
Replacing chain-of-trees LSTM with independent tree LSTMs | 64.8 | 75.2
Removing the attention layer | 63.9 | 74.3
Replacing tree-guided attention with flat attention | 65.6 | 75.9
CCNN (single model) | 69.3 | 78.5
CCNN (ensemble) | 74.1 | 82.6
CCNN (single model) achieves an EM score of 69.3% and an F1 score of 78.5%, significantly outperforming all previous single-model approaches. Through ensembling, the performance of CCNN is further improved and outperforms the baseline ensemble methods. The key difference between our method and previous approaches is that CCNN is constituent-centric, where the generation and encoding of candidate answers are both based on constituents, while the baseline approaches are mostly word-based, where the candidate answer is an arbitrary span of words and the encoding is performed over individual words rather than at the constituent level. The constituent-centric model design enjoys two major benefits. First, restricting the candidate answers from arbitrary spans to neighborhoods around the constituents greatly reduces the search space, which mitigates the ambiguity and uncertainty in picking out the correct answer. Second, the tree LSTMs and tree-guided attention mechanism encapsulate the syntactic, hierarchical and compositional structure among constituents, which leads to better representation learning of the candidate answers. We conjecture these are the primary reasons that CCNN outperforms the baselines, and provide a validation in the next section.
3.3 Ablation Study
To further understand the individual modules in CCNN, we perform an ablation study. The results are shown in Table 2.
Tree LSTM To evaluate the effectiveness of the tree LSTM in learning syntax-aware representations, we replace it with a syntax-agnostic chain LSTM. We build a bi-directional chain LSTM (denoted by A) over the entire passage to encode the individual words. Given a constituent $C = w_i \cdots w_j$, we build another bi-directional chain LSTM (denoted by B) over $C$ where the inputs are the encodings of words $w_i, \cdots, w_j$ generated by LSTM A. In LSTM B, the forward hidden state of $w_j$ and the backward state of $w_i$ are concatenated to represent $C$. Note that the attention mechanism remains intact, still guided by the parse tree. This replacement causes drops of 5.8% and 4.6% in the EM and F1 scores respectively, which demonstrates the necessity of incorporating syntactic structure (via the tree LSTM) into representation learning.
Chain-of-Trees LSTM (CT-LSTM) We evaluate the effectiveness of CT-LSTM by comparing it with a bag of tree LSTMs: instead of using a chain LSTM to glue the tree LSTMs, we treat them as independent. Keeping the other modules intact and replacing CT-LSTM with a bag of independent tree LSTMs, the EM and F1 scores drop by 4.5% and 3.3% respectively. The advantage of CT-LSTM is that it enables the semantics of one sentence to be propagated to others, which makes multiple-sentence reasoning possible.
Tree-Guided Attention (TGA) Mechanism To evaluate the effectiveness of TGA, we performed two studies. First, we remove it from the architecture.
Then constituents in the passage are solely represented by the chain-of-trees LSTM encodings, and the question sentence is represented by the tree LSTM encoding at the root of the parse tree. At test time, we concatenate the encodings of a candidate answer and the question as inputs of the prediction network. Removing the attention layer decreases the EM and F1 by 5.4% and 4.2% respectively, demonstrating the effectiveness of the attention mechanism for question-aware representation learning.
Second, we compare the tree-structured mechanism in TGA with a flat-structure mechanism. For each constituent $h^{(p)}_i$ in the passage, we compute its unnormalized score $a_{ij} = h^{(p)}_i \cdot h^{(q)}_j$ with every constituent $h^{(q)}_j$ in the question (which has $R$ constituents). Then a global softmax operation is applied to these scores, $\{\tilde{a}_{ij}\}_{j=1}^{R} = \mathrm{softmax}(\{a_{ij}\}_{j=1}^{R})$, to project them into a probabilistic simplex. Finally, a flat summarization $\sum_{j=1}^{R} \tilde{a}_{ij} h^{(q)}_j$ is computed and appended to $h^{(p)}_i$. Replacing TGA with flat-structure attention causes the EM and F1 to drop by 3.7% and 2.6% respectively, which demonstrates the advantage of the tree-guided mechanism.
Figure 6: Performance for different (a) M (expansion number), (b) answer length, (c) question type.
Constituent Expansion We study how the maximum expansion number M affects performance. If M is too small, many correct answers are not contained in the candidate set, which results in low recall. If M is too large, excessive candidates are generated, making it harder to pick out the correct one. Figure 6(a) shows how EM and F1 vary as M increases, from which we can see that a value of M in the middle ground achieves the best tradeoff.
3.4 Analysis
In this section, we study how CCNN behaves across different answer lengths (number of words in the answer) and question types, which are shown in Figure 6(b) and (c). In Figure 6(b), we compare with the MPCM method (Wang et al., 2016). As answer length increases, the performance of both methods decreases. This is because for longer answers, it is more difficult to pinpoint the precise boundaries. F1 decreases more slowly than EM, because F1 is more elastic to small mismatches. Our method achieves a larger improvement over MPCM on longer answers. We conjecture the reason is that longer answers have more complicated syntactic structure, which can be better captured by the tree LSTMs and tree-guided attention mechanism in our method, whereas MPCM is built upon individual words and is syntax-agnostic. In Figure 6(c), we compare with DCN (Xiong et al., 2016) on 8 question types. Our method achieves significant improvement over DCN on four types: "what", "where", "why" and "other". The answers of questions in these types are typically longer and have more complicated syntactic structure than the other four types, where the answers are mostly entities (person, numeric, time, etc.). The syntax-aware nature of our method makes it outperform DCN, whose model design does not explicitly consider syntactic structures.
4 Related Works
Several neural network based approaches have been proposed to solve the SQuAD QA problem, which we briefly review from three aspects: candidate answer generation, representation learning and attention mechanism.
Two ways were investigated for candidate answer generation: (1) chunking: candidates are preselected based on lexical and syntactic analysis, such as constituent parsing (Rajpurkar et al., 2016) and part-of-speech pattern (Yu et al., 2016); (2) directly predicting the start and end position of the answer span, using feed-forward neural network (Wang et al., 2016), LSTM (Seo et al., 2016), pointer network (Vinyals et al., 2015; Wang and Jiang, 2016), dynamic pointer decoder (Xiong et al., 2016). The representation learning in previous approaches is conducted over individual words using the following encoders: LSTM in (Wang et al., 2016; Xiong et al., 2016); bi-directional gated recurrent unit (Chung et al., 2014) in (Yu et al., 2016); match-LSTM in (Wang and Jiang, 2016); bi-directional LSTM in (Seo et al., 2016). In previous approaches, the attention (Bahdanau et al., 2014; Xu et al., 2015) mechanism is mostly word-based and flat-structured (Kadlec et al., 2016; Sordoni et al., 2016; Wang and Jiang, 1412 2016; Wang et al., 2016; Yu et al., 2016): the attention scores are computed between individual words, are normalized globally and are used to summarize word-level encodings in a flat manner. Cui et al. (2016); Xiong et al. (2016) explored a coattention mechanism to learn question-topassage and passage-to-question summaries. Seo et al. (2016) proposed to directly use the attention weights as augmented features instead of applying them for early summarization. 5 Conclusions and Future Work To solve the SQuAD question answering problem, we design a constituent centric neural network (CCNN), where the generation and representation learning of candidate answers are both based on constituents. We use a constituent expansion mechanism to produce candidate answers, which can greatly reduce the search space without losing the recall of hitting the correct answer. To represent these candidate answers, we propose a chain-of-trees LSTM to encode constituents and a tree-guided attention mechanism to learn question-aware representations. Evaluations on the SQuAD dataset demonstrate the effectiveness of the constituent-centric neural architecture. For future work, we will investigate the wider applicability of chain-of-trees LSTM as a general text encoder that can simultaneously capture local syntactic structure and long-range semantic dependency. It can be applied to named entity recognition, sentiment analysis, dialogue generation, to name a few. We will also apply the tree-guided attention mechanism to NLP tasks that need syntaxaware attention, such as machine translation, sentence summarization, textual entailment, etc. Another direction to explore is joint learning of syntactic parser and chain-of-trees LSTM. Currently, the two are separated, which may lead to suboptimal performance. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858 . Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Peter Clark and Oren Etzioni. 2016. My computer is an honor student-but how intelligent is it? standardized tests as a measure of ai. AI Magazine 37(1):5–12. 
Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter D Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. In AAAI. pages 2580–2586. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2016. Attention-overattention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423 . Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional lstm. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, pages 273–278. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301 . Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In EMNLP. pages 523–533. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547 . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436 . Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. 1413 Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268 . Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 . Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP. volume 3, page 4. Mrinmaya Sachan, Kumar Dubey, Eric P Xing, and Matthew Richardson. 2015. Learning answerentailing structures for machine comprehension. In ACL (1). pages 239–249. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2016. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284 . Alessandro Sordoni, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245 . Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075 . Zhiyang Teng and Yue Zhang. 2016. 
Bidirectional tree-structured lstm with head lexicalization. arXiv preprint arXiv:1611.06788 . Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. pages 2692–2700. Hai Wang and Mohit Bansal Kevin Gimpel David McAllester. 2015. Machine comprehension with syntax, frames, and semantics . Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905 . Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211 . Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In EMNLP. Citeseer, pages 2013– 2018. Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W Cohen, and Ruslan Salakhutdinov. 2016. Words or characters? fine-grained gating for reading comprehension. arXiv preprint arXiv:1611.01724 . Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. 2016. End-to-end answer chunk extraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996 . 1414
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 136–145, Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1013

Deep Neural Machine Translation with Linear Associative Unit
Mingxuan Wang1 Zhengdong Lu2 Jie Zhou2 Qun Liu4,5
1Mobile Internet Group, Tencent Technology Co., Ltd, [email protected]
2DeeplyCurious.ai
3Institute of Deep Learning Research, Baidu Co., Ltd
4Institute of Computing Technology, Chinese Academy of Sciences
5ADAPT Centre, School of Computing, Dublin City University

Abstract
Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability in modeling complex functions and capturing complex linguistic structures. However, NMT systems with deep architecture in their encoder or decoder RNNs often suffer from severe gradient diffusion due to the non-linear recurrent activations, which often makes the optimization much more difficult. To address this problem, we propose novel linear associative units (LAU) to reduce the gradient propagation length inside the recurrent unit. Different from conventional approaches (the LSTM unit and GRU), LAUs utilize linear associative connections between the input and output of the recurrent unit, which allow unimpeded information flow in both the space and time directions. The model is quite simple, but it is surprisingly effective. Our empirical study on Chinese-English translation shows that our model with proper configuration can improve by 11.7 BLEU upon Groundhog and the best reported results in the same setting. On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves results comparable to the state-of-the-art.

1 Introduction
Neural Machine Translation (NMT) is an end-to-end learning approach to machine translation which has recently shown promising results on multiple language pairs (Luong et al., 2015; Shen et al., 2015; Wu et al., 2016; Zhang et al., 2016; Tu et al., 2016; Zhang and Zong, 2016; Jean et al., 2015; Meng et al., 2015). Unlike conventional Statistical Machine Translation (SMT) systems (Koehn et al., 2003; Chiang, 2005; Liu et al., 2006; Xiong et al., 2006; Mi et al., 2008), which consist of multiple separately tuned components, NMT aims at building a single, large neural network that directly maps input text to the associated output text. Typical NMT models consist of two recurrent neural networks (RNNs): an encoder to read and encode the input text into a distributed representation and a decoder to generate translated text conditioned on the input representation (Sutskever et al., 2014; Bahdanau et al., 2014).
Driven by the breakthrough achieved in computer vision (He et al., 2015; Srivastava et al., 2015), research in NMT has recently turned towards studying Deep Neural Networks (DNNs). Wu et al. (2016) and Zhou et al. (2016) found that deep architectures in both the encoder and decoder are essential for capturing subtle irregularities in the source and target languages. However, training a deep neural network is not as simple as stacking layers. Optimization often becomes increasingly difficult with more layers.
One reasonable explanation is the notorious problem of vanishing/exploding gradients, which was first studied in the context of vanilla RNNs (Pascanu et al., 2013b). Most prevalent approaches to solving this problem rely on short-cut connections between adjacent layers such as residual or fast-forward connections (He et al., 2015; Srivastava et al., 2015; Zhou et al., 2016). Different from previous work, we choose to reduce the gradient path inside the recurrent units and propose a novel Linear Associative Unit (LAU) which creates a fusion of both linear and nonlinear transformations of the input. Through this design, information can flow across several steps both in time and in space with little attenuation. The mechanism makes it easy to train deep stacked RNNs which can efficiently capture the complex inherent structures of sentences for NMT. Based on LAUs, we also propose an NMT model, called DEEPLAU, with deep architecture in both the encoder and decoder.
Although DEEPLAU is fairly simple, it gives remarkable empirical results. On the NIST Chinese-English task, DEEPLAU with proper settings yields the best reported result and also a 4.9 BLEU improvement over a strong NMT baseline with most known techniques (e.g., dropout) incorporated. On the WMT English-German and English-French tasks, it also achieves performance superior or comparable to the state-of-the-art.
2 Neural machine translation
A typical neural machine translation system is a single, large neural network which directly models the conditional probability $p(y \mid x)$ of translating a source sentence $x = \{x_1, x_2, \cdots, x_{T_x}\}$ to a target sentence $y = \{y_1, y_2, \cdots, y_{T_y}\}$.
Attention-based NMT, with RNNsearch as its most popular representative, generalizes the conventional notion of encoder-decoder by using an array of vectors to represent the source sentence and dynamically addressing the relevant segments of them during decoding. The process can be explicitly split into an encoding part, a decoding part and an attention mechanism. The model first encodes the source sentence $x$ into a sequence of vectors $c = \{h_1, h_2, \cdots, h_{T_x}\}$. In general, $h_i$ is the annotation of $x_i$ from a bi-directional RNN which contains information about the whole sentence with a strong focus on the parts around $x_i$. Then, the RNNsearch model decodes and generates the target translation $y$ based on the context $c$ and the partially translated sequence $y_{<t}$ by maximizing the probability $p(y_i \mid y_{<i}, c)$. In the attention model, $c$ is dynamically obtained according to the contribution of the source annotations to the word prediction. This is called automatic alignment (Bahdanau et al., 2014) or an attention mechanism (Luong et al., 2015), but it is essentially reading with content-based addressing as defined in (Graves et al., 2014). With this addressing strategy the decoder can attend to the source representation that is most relevant to the current stage of decoding.
Deep neural models have recently achieved great success in a wide range of problems. In computer vision, models with more than 100 convolutional layers have outperformed shallow ones by a big margin on a series of image tasks (He et al., 2015; Srivastava et al., 2015). Following similar ideas of building deep CNNs, some promising improvements have also been achieved in building deep NMT systems. Zhou et al. (2016) proposed a new type of linear connections between adjacent layers to simplify the training of deeply stacked RNNs. Similarly, Wu et al.
(2016) introduced residual connections into their deep neural machine translation system and achieved great improvements. However, the optimization of deep RNNs is still an open problem due to the massive recurrent computation, which makes the gradient propagation path extremely tortuous.
3 Model Description
In this section, we discuss the Linear Associative Unit (LAU), which eases the training of deep stacks of RNNs. Based on this idea, we further propose DEEPLAU, a neural machine translation model with a deep encoder and decoder.
3.1 Recurrent Layers
A recurrent neural network (Williams and Zipser, 1989) is a class of neural network that has recurrent connections and a state (or its more sophisticated memory-like extension). The past information is built up through the recurrent connections. This makes RNNs applicable to sequential prediction tasks of arbitrary length. Given a sequence of vectors $x = \{x_1, x_2, \cdots, x_T\}$ as input, a standard RNN computes the sequence of hidden states $h = \{h_1, h_2, \cdots, h_T\}$ by iterating the following equation from $t = 1$ to $t = T$:
$h_t = \phi(x_t, h_{t-1})$   (1)
$\phi$ is usually a nonlinear function such as the composition of a logistic sigmoid with an affine transformation.
3.2 Gated Recurrent Unit
It is difficult to train RNNs to capture long-term dependencies because the gradients tend to either vanish (most of the time) or explode. The effect of long-term dependencies drops exponentially with respect to the gradient propagation length. The problem was explored in depth by (Hochreiter and Schmidhuber, 1997; Pascanu et al., 2013b). A successful approach is to design a more sophisticated activation function, consisting of gating functions that control the information flow and reduce the propagation path. There is a long thread of work aiming to solve this problem, with the long short-term memory unit (LSTM) being the most salient example and the gated recurrent unit (GRU) being the most recent one (Hochreiter and Schmidhuber, 1997; Cho et al., 2014). RNNs employing either of these recurrent units have been shown to perform well in tasks that require capturing long-term dependencies.
GRU can be viewed as a slightly more dramatic variation on LSTM with fewer parameters. The activation function is armed with two specifically designed gates, called the update and reset gates, to control the flow of information inside each hidden unit. Each hidden state at time step $t$ is computed as follows:
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$   (2)
where $\odot$ is an element-wise product, $z_t$ is the update gate, and $\tilde{h}_t$ is the candidate activation:
$\tilde{h}_t = \tanh(W_{xh} x_t + W_{hh}(r_t \odot h_{t-1}))$   (3)
where $r_t$ is the reset gate. Both the reset and update gates are computed as:
$r_t = \sigma(W_{xr} x_t + W_{hr} h_{t-1})$   (4)
$z_t = \sigma(W_{xz} x_t + W_{hz} h_{t-1})$   (5)
This procedure of taking a linear sum between the existing state and the newly computed state is similar to the LSTM unit.
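For reference, a GRU step following Eqs. (2)-(5) can be written compactly as below. This is a generic NumPy sketch for illustration, not the authors' implementation; biases are omitted and the dictionary-based weight layout is an assumption of the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U):
    """One GRU update (Eqs. 2-5).

    W[k], U[k] are (d, d) weight matrices for k in {'r', 'z', 'h'};
    x_t and h_prev have shape (d,). Biases are omitted for brevity.
    """
    r = sigmoid(W['r'] @ x_t + U['r'] @ h_prev)                # reset gate, Eq. (4)
    z = sigmoid(W['z'] @ x_t + U['z'] @ h_prev)                # update gate, Eq. (5)
    h_tilde = np.tanh(W['h'] @ x_t + U['h'] @ (r * h_prev))    # candidate, Eq. (3)
    return (1.0 - z) * h_prev + z * h_tilde                    # new state, Eq. (2)
```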
3.3 Linear Associative Unit
GRU can actually be viewed as a non-linear activation function with a gating mechanism. Here we propose the LAU, which extends GRU by having an additional linear transformation of the input in its dynamics. More formally, the state update function becomes
$h_t = \big((1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t\big) \odot (1 - g_t) + g_t \odot H(x_t)$   (6)
Here the updated $h_t$ has three sources: 1) the direct transfer from the previous state $h_{t-1}$, 2) the candidate update $\tilde{h}_t$, and 3) a direct contribution from the input $H(x_t)$. More specifically, $\tilde{h}_t$ contains the nonlinear information of the input and the previous hidden state:
$\tilde{h}_t = \tanh\big(f_t \odot (W_{xh} x_t) + r_t \odot (W_{hh} h_{t-1})\big)$   (7)
where $f_t$ and $r_t$ express how much of the nonlinear abstraction is produced by the input $x_t$ and the previous hidden state $h_t$, respectively. For simplicity, we set $f_t = 1 - r_t$ in this paper and find that this works well in our experiments. The term $H(x_t)$ is usually an affine linear transformation of the input $x_t$ to match the dimensions of $h_t$, where $H(x_t) = W_x x_t$. The associated term $g_t$ (the input gate) decides how much of the linear transformation of the input is carried to the hidden state and then the output. The gating functions $r_t$ (reset gate) and $z_t$ (update gate) are computed following Equations (4) and (5), while $g_t$ is computed as
$g_t = \sigma(W_{xg} x_t + W_{hg} h_{t-1})$   (8)
The term $g_t \odot H(x_t)$ therefore offers a direct way for the input $x_t$ to reach later hidden layers, which can eventually lead to a path to the output layer when applied recursively. This mechanism is potentially very useful for translation, where the input, no matter whether it is the source word or the attentive reading (context), should sometimes be carried directly to the next stage of processing without any substantial composition or nonlinear transformation. To understand this, imagine we want to translate an English sentence containing a relatively rare entity name such as "Bahrain" into Chinese: LAU is potentially able to retain the embedding of this word in its hidden state, which will otherwise be prone to serious distortion due to the scarcity of training instances for it.
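A LAU step (Eqs. 6-8, with $f_t = 1 - r_t$ as in the paper) then only adds a gated linear path $H(x_t) = W_x x_t$ on top of the GRU-style update. The sketch below is an illustrative NumPy version with the same caveats as the GRU sketch above (biases omitted, weight layout assumed).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lau_step(x_t, h_prev, W, U, Wx):
    """One LAU update (Eqs. 6-8).

    W[k], U[k] are (d, d) gate weights for k in {'r', 'z', 'g', 'h'};
    Wx is the linear input transformation H(x) = Wx @ x.
    """
    r = sigmoid(W['r'] @ x_t + U['r'] @ h_prev)                      # reset gate
    z = sigmoid(W['z'] @ x_t + U['z'] @ h_prev)                      # update gate
    g = sigmoid(W['g'] @ x_t + U['g'] @ h_prev)                      # input gate, Eq. (8)
    f = 1.0 - r                                                      # f_t = 1 - r_t
    h_tilde = np.tanh(f * (W['h'] @ x_t) + r * (U['h'] @ h_prev))    # Eq. (7)
    gru_like = (1.0 - z) * h_prev + z * h_tilde
    return gru_like * (1.0 - g) + g * (Wx @ x_t)                     # Eq. (6)
```

The only structural difference from the GRU step is the last line, where the gate g mixes in the untransformed (linear) view of the input, which is what shortens the gradient path through the unit.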
3.4 DEEPLAU
Figure 1: DEEPLAU: a neural machine translation model with deep encoder and decoder.
Graves et al. (2013) explored the advantages of deep RNNs for handwriting recognition and text generation. There are multiple ways of combining one layer of RNN with another. Pascanu et al. (2013a) introduced Deep Transition RNNs with Skip connections (DT(S)RNNs). Kalchbrenner et al. (2015) proposed to make a full connection of all the RNN hidden layers. In this work we employ vertical stacking, where only the output of the previous layer of RNN is fed to the current layer as input. The input at recurrent layer $\ell$ (denoted as $x^\ell_t$) is exactly the output of the same time step at layer $\ell - 1$ (denoted as $h^{\ell-1}_t$). Additionally, in order to learn more temporal dependencies, the sequences can be processed in different directions. More formally, given an input sequence $x = (x_1, \ldots, x_T)$, the output at layer $\ell$ is
$h^{(\ell)}_t = \begin{cases} x_t, & \ell = 1 \\ \phi_\ell\big(h^{(\ell)}_{t+d},\, h^{(\ell-1)}_t\big), & \ell > 1 \end{cases}$   (9)
where
• $h^{(\ell)}_t$ gives the output of layer $\ell$ at location $t$;
• $\phi$ is a recurrent function, and we choose LAUs in this work;
• the directions are marked by a direction term $d \in \{-1, 1\}$. If we fix $d$ to $-1$, the input is processed in the forward direction, otherwise in the backward direction.
The deep architecture of DEEPLAU, as shown in Figure 1, consists of three parts: a stacked LAU-based encoder, a stacked LAU-based decoder and an improved attention model.
Encoder One shortcoming of conventional RNNs is that they are only able to make use of previous context. In machine translation, where whole source utterances are transcribed at once, there is no reason not to exploit future context as well. Thus bi-directional RNNs are proposed to integrate information from the past and the future. The typical bidirectional approach processes the raw input in the backward and forward directions with two separate layers, and then concatenates them together. Following Zhou et al. (2016), we choose another bidirectional approach to process the sequence in order to learn more temporal dependencies in this work. Specifically, an RNN layer processes the input sequence in the forward direction. The output of this layer is taken by an upper RNN layer as input and processed in the reverse direction. Formally, following Equation (9), we set $d = (-1)^\ell$. This approach can easily build a deeper network with the same number of parameters compared to the classical approach. The final encoder consists of $L_{enc}$ layers and produces the output $h^{L_{enc}}$ to compute the conditional input $c$ to the decoder.
Attention Model The alignment model $\alpha_{t,j}$ scores how well the output at position $t$ matches the inputs around position $j$, based on $s^1_{t-1}$ and $h^{L_{enc}}_j$, where $h^{L_{enc}}_j$ is the top-most layer of the encoder at step $j$ and $s^1_{t-1}$ is the first layer of the decoder at step $t - 1$. It is intuitively beneficial to exploit the information of $y_{t-1}$ when reading from the source sentence representation, which is missing from the implementation of attention-based NMT in (Bahdanau et al., 2014). In this work, we build a more effective alignment path by feeding both the previous hidden state $s^1_{t-1}$ and the context word $y_{t-1}$ to the attention model, inspired by the recent implementation of attention-based NMT (github.com/nyu-dl/dl4mt-tutorial/tree/master/session2). The conditional input $c_j$ is a weighted sum of the attention scores $\alpha_{t,j}$ and the encoder output $h^{L_{enc}}$. Formally, the calculation of $c_j$ is
$c_j = \sum_{t=1}^{L_x} \alpha_{t,j}\, h^{L_{enc}}_t$   (10)
where
$e_{t,j} = v_a^{T} \sigma(W_a s^1_{t-1} + U_a h^{L_{enc}}_j + W_y y_{t-1}), \quad \alpha_{t,j} = \mathrm{softmax}(e_{t,j})$.   (11)
$\sigma$ is a nonlinear function with the information of $y_{t-1}$ (its word embedding being $y_{t-1}$) added. In our preliminary experiments, we found that GRU works slightly better than the tanh function, but we chose the latter for simplicity.
Decoder The decoder follows Equation (9) with the direction term fixed to $d = -1$. At the first layer, we use the following input: $x_t = [c_t, y_{t-1}]$, where $y_{t-1}$ is the target word embedding at time step $t$ and $c_t$ is dynamically obtained following Equation (10). There are $L_{dec}$ layers of RNNs armed with LAUs in the decoder. At the inference stage, we only utilize the top-most hidden states $s^{L_{dec}}$ to make the final prediction with a softmax layer:
$p(y_i \mid y_{<i}, x) = \mathrm{softmax}(W_o s^{L_{dec}}_i)$   (12)
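To make Eq. (11) concrete, the sketch below scores every source position for one decoding step, feeding in both the first-layer decoder state and the previous target word embedding, and then forms the context vector as in Eq. (10). It is an illustrative NumPy sketch with assumed shapes and variable names, not the released system, and it uses tanh as the nonlinearity as the paper does.

```python
import numpy as np

def attention_step(h_enc, s_prev, y_prev, Wa, Ua, Wy, va):
    """Compute alignment weights and the conditional input c (Eqs. 10-11).

    h_enc:  (Lx, d)  top-layer encoder states
    s_prev: (d,)     first-layer decoder state s^1_{t-1}
    y_prev: (d,)     embedding of the previous target word
    Wa, Ua, Wy: (d, d) projection matrices; va: (d,) scoring vector
    """
    # e_j = va^T tanh(Wa s_prev + Ua h_j + Wy y_prev) for every source position j
    proj = np.tanh(s_prev @ Wa.T + h_enc @ Ua.T + y_prev @ Wy.T)   # (Lx, d)
    e = proj @ va                                                  # (Lx,)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                                           # softmax over source
    c = alpha @ h_enc                                              # weighted sum
    return alpha, c
```

The returned context vector is what gets concatenated with the previous target embedding as the first-layer decoder input described above.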
To evaluate at scale, we also report the results of English-French. To compare with the results reported by previous work on end-toend NMT (Sutskever et al., 2014; Bahdanau et al., 2014; Jean et al., 2015; Luong et al., 2014; Zhou et al., 2016), we used the same subset of the WMT 2014 training corpus that contains 12M sentence pairs with 304M English words and 348M French words. The concatenation of news-test 2012 and news-test 2013 serves as the validation set and news-test 2014 as the test set. 4.2 Training details Our training procedure and hyper parameter choices are similar to those used by (Bahdanau et al., 2014). In more details, we limit the source and target vocabularies to the most frequent 30K words in both Chinese-English and English-French. For English-German, we set the source and target vocabularies size to 120K and 80K, respectively. For all experiments, the dimensions of word embeddings and recurrent hidden states are both set to 512. The dimension of ct is also of size 512. Note that our network is more narrow than most previous work where hidden states of dimmention 1024 is used. we initialize parameters by sampling each element from the Gaussian distribution with mean 0 and variance 0.042. Parameter optimization is performed using stochastic gradient descent. Adadelta (Zeiler, 3The corpora include LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06. 140 SYSTEM MT03 MT04 MT05 MT06 AVE. Existing systems Moses 31.61 33.48 30.75 30.85 31.67 Groundhog 31.92 34.09 31.56 31.12 32.17 COVERAGE 34.49 38.34 34.91 34.25 35.49 MEMDEC 36.16 39.81 35.91 35.98 36.95 Our deep NMT systems DEEPGRU 33.21 36.76 33.05 33.30 34.08 DEEPLAU 39.35 41.15 38.07 37.29 38.97 DEEPLAU +Ensemble + PosUnk 42.21 43.85 44.75 42.58 43.35 Table 1: Case-insensitive BLEU scores on Chinese-English translation. 2012) is used to automatically adapt the learning rate of each parameter (ϵ = 10−6 and ρ = 0.95). To avoid gradient explosion, the gradients of the cost function which had ℓ2 norm larger than a predefined threshold τ were normalized to the threshold (Pascanu et al., 2013a). We set τ to 1.0 at the beginning and halve the threshold until the BLEU score does not change much on the development set. Each SGD is a mini-batch of 128 examples. We train our NMT model with the sentences of length up to 80 words in the training data, while for the Moses system we use the full training data. Translations are generated by a beam search and log-likelihood scores are normalized by sentence length. We use a beam width of 10 in all the experiments. Dropout was also applied on the output layer to avoid over-fitting. The dropout rate is set to 0.5. Except when otherwise mentioned, NMT systems are have 4 layers encoders and 4 layers decoders. 4.3 Results on Chinese-English Translation Table 1 shows BLEU scores on ChineseEnglish datasets. Clearly DEEPLAU leads to a remarkable improvement over their competitors. Compared to DEEPGRU, DEEPLAU is +4.89 BLEU score higher on average four test sets, showing the modeling power gained from the liner associative connections. We suggest it is because LAUs apply adaptive gate function conditioned on the input which make it able to automatically decide how much linear information should be transferred to the next step. To show the power of DEEPLAU, we also make a comparison with previous work. 
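Returning briefly to the training details above, the gradient-norm clipping used during optimization (rescaling any gradient whose ℓ2 norm exceeds the threshold τ back to the threshold) is simple to state; the sketch below is a generic illustration, not tied to a particular toolkit.

```python
# Minimal sketch of global gradient-norm clipping (Pascanu et al., 2013a):
# if the l2 norm of the gradient exceeds tau, rescale it to norm tau.
import numpy as np

def clip_gradients(grads, tau=1.0):
    """grads: list of numpy arrays, one per parameter."""
    total_norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if total_norm > tau:
        scale = tau / total_norm
        grads = [g * scale for g in grads]
    return grads, total_norm
```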
Our best single model outperforms both a phrasedbased MT system (Moses) as well as an open source attention-based NMT system (Groundhog) by +7.3 and +6.8 BLEU points respectively on average. The result is also better than some other state-of-the-art variants of attention-based NMT mode with big margins. After PosUnk and ensemble, DEEPLAU seizes another notable gain of +4.38 BLEU and outperform Moses by +11.68 BLEU. 4.4 Results on English-German Translation The results on English-German translation are presented in Table 2. We compare our NMT systems with various other systems including the winning system in WMT14 (Buck et al., 2014), a phrase-based system whose language models were trained on a huge monolingual text, the Common Crawl corpus. For end-toend NMT systems, to the best of our knowledge, Wu et al. (2016) is currently the SOTA system and about 4 BLEU points on top of previously best reported results even though Zhou et al. (2016) used a much deeper neural network4. Following Wu et al. (2016), the BLEU score represents the averaged score of 8 models we trained. Our approach achieves comparable results with SOTA system. As can be seen from the Table 2, DeepLAU performs better than the word based model and even not much worse than the best wordpiece models achieved by Wu et al. (2016). Note that DEEPLAU are sim4It is also worth mentioning that the result reported by Zhou et al. (2016) does not include PosUnk, and this comparison is not fair enough. 141 SYSTEM Architecture Voc. BLEU Existing systems Buck et al. (2014) Winning WMT14 system phrase-based + large LM 20.7 Jean et al. (2015) gated RNN with search + LV + PosUnk 500K 19.4 Luong et al. (2015) LSTM with 4 layers + dropout + local att. + PosUnk 80K 20.9 Shen et al. (2015) gated RNN with search + PosUnk + MRT 80K 20.5 Zhou et al. (2016) LSTM with 16 layers + F-F connections 80K 20.6 Wu et al. (2016) LSTM with 8 laysrs + RL-refined Word 80K 23.1 Wu et al. (2016) LSTM with 8 laysrs + RL-refined WPM-32K 24.6 Wu et al. (2016) LSTM with 8 laysrs + RL-refined WPM-32K + Ensemble 26.3 Our deep NMT systems this work DEEPLAU 80K 22.1(±0.3) this work DEEPLAU + PosUnk 80K 23.8(±0.3) this work DEEPLAU + PosUnk + Ensemble 8 models 80K 26.1 Table 2: Case-sensitive BLEU scores on German-English translation. ple and easy to implement, as opposed to previous models reported in Wu et al. (2016), which dependends on some external techniques to achieve their best performance, such as their introduction of length normalization, coverage penalty, fine-tuning and the RL-refined model. 4.5 Results on English-French Translation SYSTEM BLEU Enc-Dec (Luong et al., 2014) 30.4 RNNsearch (Bahdanau et al., 2014) 28.5 RNNsearch-LV (Jean et al., 2015) 32.7 Deep-Att (Zhou et al., 2016) 35.9 DEEPLAU 35.1 Table 3: English-to-French task: BLEU scores of single neural models. To evaluate at scale, we also show the results on an English-French task with 12M sentence pairs and 30K vocabulary in Table 3. Luong et al. (2014) achieves BLEU score of 30.4 with a six layers deep Encoder-Decoder model. The two attention models, RNNSearch and RNNsearch-LV achieve BLEU scores of 28.5 and 32.7 respectively. The previous best single NMT Deep-Att model with an 18 layers encoder and 7 layers decoder achieves BLEU score of 35.9. For DEEPLAU, we obtain the BLEU score of 35.1 with a 4 layers encoder and 4 layers decoder, which is on par with the SOTA system in terms of BLEU. Note that Zhou et al. 
(2016) utilize a much larger depth as well as an external alignment model and extensive regularization to achieve their best results.

4.6 Analysis

We now study the main factors that influence our results on the NIST Chinese-English translation task. We also compare our approach with two state-of-the-art topologies that have been used to build deep NMT systems.

• Residual Networks (ResNet) are among the pioneering works (Szegedy et al., 2016; He et al., 2016) that utilize extra identity connections to enhance information flow, so that very deep neural networks can be effectively optimized. Sharing a similar idea, Wu et al. (2016) leveraged residual connections to train deep RNNs.

• Fast-Forward (F-F) connections were proposed to reduce the propagation path length and were the pioneering work on simplifying the training of deep NMT models (Zhou et al., 2016). The approach can be viewed as a parametric ResNet with shortcut connections between adjacent layers: it takes a linear sum of the input and the newly computed state.

LAU vs. GRU. Table 4 shows the effect of the novel LAU.

  #   SYSTEM     (Lenc, Ldec)   width   AVE.
  1   DEEPGRU    (2,1)          512     33.59
  2   DEEPGRU    (2,2)          1024    34.68
  3   DEEPGRU    (2,2)          512     34.91
  4   DEEPGRU    (4,4)          512     34.08
  5   4+ResNet   (4,4)          512     36.40
  6   4+F-F      (4,4)          512     37.62
  7   DEEPLAU    (2,2)          512     37.65
  8   DEEPLAU    (4,4)          512     38.97
  9   DEEPLAU    (8,6)          512     39.01
  10  DEEPLAU    (8,6)          256     38.91

Table 4: BLEU scores of DEEPLAU and DEEPGRU with different model sizes.

Comparing row 3 to row 7, we see that when Lenc and Ldec are set to 2, the average BLEU scores achieved by DEEPGRU and DEEPLAU are 34.68 and 37.65, respectively; LAU brings an improvement of 2.97 BLEU. After increasing the model depth to 4 (row 4 and row 6), the improvement grows to 4.91. When DEEPGRU is trained with a larger depth (say, 4), training becomes more difficult and its performance falls behind its shallower counterpart. For DEEPLAU, in contrast, as can be seen in row 9, increasing the depth even to Lenc = 8 and Ldec = 6 still yields a gain of 0.04 BLEU. Compared to the previous shortcut-connection methods (row 5 and row 6), LAU still achieves meaningful improvements over F-F connections and residual connections, by +1.35 and +2.57 BLEU points respectively.

DEEPLAU introduces more parameters than DEEPGRU. To assess the effect of DEEPLAU when comparing models of the same parameter size, we increase the hidden size of the DEEPGRU model. Row 3 shows that, after using a twice larger GRU layer, the BLEU score is 34.68, which is still worse than the corresponding DEEPLAU model with fewer parameters.

Depth vs. Width. Next we study the model size. In Table 4, starting from Lenc = 2 and Ldec = 2 and gradually increasing the model depth, we achieve substantial improvements in terms of BLEU. With Lenc = 8 and Ldec = 6, our DEEPLAU model yields the best BLEU score. We tried to increase the model depth further with the same hidden size but did not see additional improvements. We then tried to increase the hidden size: comparing row 2 and row 3, we find that the improvement from a wider hidden layer is relatively small. It is also worth mentioning that a deep and thin network with fewer parameters can still achieve results comparable to its shallower counterpart. This suggests that depth plays a more important role than width in increasing the complexity of neural networks, and that our deliberately designed LAU benefits from the optimization of such a deep model.
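To make the vertical stacking of Equation (9) and the alternating-direction encoder of Section 3.4 concrete, the sketch below stacks RNN layers so that consecutive layers read their input in opposite directions (the d = (−1)^ℓ scheme). A plain tanh RNN step stands in for the LAU purely to keep the example short, and the treatment of the first (embedding) layer in Equation (9) is simplified, so the code is illustrative rather than the paper's implementation.

```python
# Sketch of the stacked encoder of Section 3.4: the first recurrent layer
# reads the input forward, the next layer reads its output backward, and so
# on, with each layer's output fed to the layer above (vertical stacking).
# A plain tanh RNN step stands in for the LAU; shapes are illustrative.
import numpy as np

def rnn_layer(inputs, Wx, Wh, reverse=False):
    """inputs: (T, d_in); returns (T, d_hidden) in the original time order."""
    seq = inputs[::-1] if reverse else inputs
    h = np.zeros(Wh.shape[0])
    outputs = []
    for x_t in seq:
        h = np.tanh(Wx @ x_t + Wh @ h)   # stand-in for the LAU step
        outputs.append(h)
    outputs = np.stack(outputs)
    return outputs[::-1] if reverse else outputs

def stacked_encoder(x, layers):
    """x: (T, d_emb); layers: list of (Wx, Wh) pairs, one per layer."""
    h = x
    for l, (Wx, Wh) in enumerate(layers, start=1):
        # directions alternate from layer to layer, following d = (-1)^l
        h = rnn_layer(h, Wx, Wh, reverse=(l % 2 == 0))
    return h  # top-layer states, consumed by the attention model
```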
26 28 30 32 34 36 38 40 42 44 10 20 30 40 50 60 BLEU(%) sentence length (Merge) LAU(4/4) LAU(2/2) GRU(4/4) Figure 2: The BLEU scores of generated translations on the merged four test sets with respect to the lengths of source sentences. About Length A more detailed comparison between DEEPLAU (4 layers encoder and 4 layers decoder), DEEPLAU(2 layer encoder and 2 layer decoder) and DEEPGRU (4 layers encoder and 4 layers decoder), suggest that with deep architectures are essential to the superior performance of our system. In particular, we test the BLEU scores on sentences longer than {10, 20, 30, 40, 50, 60} on the merged test set. Clearly, in all curves, performance degrades with increased sentence length. However, DEEPLAU models yield consistently higher BLEU scores than the DEEPGRU model on longer sentences. These observations are consistent with our intuition that very deep RNN model is especially good at modeling the nested latent structures on relatively complicated sentences and LAU plays an important role on optimizing such a complex deep model. 143 5 Conclusion We propose a Linear Associative Unit (LAU) which makes a fusion of both linear and nonlinear transformation inside the recurrent unit. On this way, gradients decay much slower compared to the standard deep networks which enable us to build a deep neural network for machine translation. Our empirical study shows that it can significantly improve the performance of NMT. 6 acknowledge We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions. Wang’s work is partially supported by National Science Foundation for Deep Semantics Based Uighur to Chinese Machine Translation (ID 61662077). Qun Liu’s work is partially supported by Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University funded under the SFI Research Centres Programme (Grant 13/RC/2106) cofunded under the European Regional Development Fund. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Christian Buck, Kenneth Heafield, and Bas Van Ooyen. 2014. N-gram counts and language models from the common crawl. In LREC. Citeseer, volume 2, page 4. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 263–270. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 . Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401 . Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 . Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 770–778. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 
2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1–10. http://www.aclweb.org/anthology/P15-1001. Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2015. Grid long short-term memory. arXiv preprint arXiv:1507.01526 . Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1. Association for Computational Linguistics, pages 48–54. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 609– 616. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 . Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2014. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206 . Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, and Qun Liu. 2015. Neural transformation machine: A new architecture for sequenceto-sequence learning. CoRR abs/1506.06442. http://arxiv.org/abs/1506.06442. 144 Haitao Mi, Liang Huang, and Qun Liu. 2008. Forest-based translation. In ACL. pages 192– 199. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318. Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2013a. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026 . Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013b. On the difficulty of training recurrent neural networks. ICML (3) 28:1310–1318. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433 . Rupesh K Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Training very deep networks. In Advances in neural information processing systems. pages 2377–2385. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alex Alemi. 2016. Inceptionv4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261 . Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. ArXiv eprints, January . Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural computation 1(2):270–280. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. 
Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 . Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 521–528. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . Biao Zhang, Deyi Xiong, and Jinsong Su. 2016. Variational neural machine translation. arXiv preprint arXiv:1605.07869 . Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of EMNLP. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fastforward connections for neural machine translation. arXiv preprint arXiv:1606.04199 . 145
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1415–1425 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1130 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1415–1425 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1130 Cross-lingual Distillation for Text Classification Ruochen Xu Carnegie Mellon Universit [email protected] Yiming Yang Carnegie Mellon Universit [email protected] Abstract Cross-lingual text classification(CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, which adapts and extends a framework originally proposed for model compression. Using soft probabilistic predictions for the documents in a label-rich language as the (induced) supervisory labels in a parallel corpus of documents, we train classifiers successfully for new languages in which labeled training data are not available. An adversarial feature adaptation technique is also applied during the model training to reduce distribution mismatch. We conducted experiments on two benchmark CLTC datasets, treating English as the source language and German, French, Japan and Chinese as the unlabeled target languages. The proposed approach had the advantageous or comparable performance of the other state-of-art methods. 1 Introduction The availability of massive multilingual data on the Internet makes cross-lingual text classification (CLTC) increasingly important. The task is defined as to classify documents in different languages using the same taxonomy of predefined categories. CLTC systems build on supervised machine learning require a sufficiently amount of labeled training data for every domain of interest in each language. But in reality, labeled data are not evenly distributed among languages and across domains. English, for example, is a label-rich language in the domains of news stories, Wikipedia pages and reviews of hotels, products, etc. But many other languages do not necessarily have such rich amounts of labeled data. This leads to an open challenge in CLTC, i.e., how can we effectively leverage the trained classifiers in a label-rich source language to help the classification of documents in other label-poor target languages? Existing methods in CLTC use either a bilingual dictionary or a parallel corpus to bridge language barriers and to translate classification models (Xu et al., 2016) or text data(Zhou et al., 2016a). There are limitations and challenges in using either type of resources. Dictionary-based methods often ignore the dependency of word meaning and its context, and cannot leverage domainspecific disambiguation when the dictionary on hand is a general-purpose one. Parallel-corpus based methods, although more effective in deploying context (when combined with word embedding in particular), often have an issue of domain mismatch or distribution mismatch if the available source-language training data, the parallel corpus (human-aligned or machine-translation induced one) and the target documents of interest are not in exactly the same domain and genre(Duh et al., 2011). How to solve such domain/distribution mismatch problems is an open question for research. 
This paper proposes a new parallel-corpus based approach, focusing on the reduction of domain/distribution matches in CLTC. We call this approach Cross-lingual Distillation with Feature Adaptation or CLDFA in short. It is inspired by the recent work in model compression (Hinton et al., 2015) where a large ensemble model is transformed to a compact (small) model. The assumption of knowledge distillation for model compression is that the knowledge learned by the large model can be viewed as a mapping from in1415 put space to output (label) space. Then, by training with the soft labels predicted by the large model, the small model can capture most of the knowledge from the large model. Extending this key idea to CLTC, if we see parallel documents as different instantiations of the same semantic concepts in different languages, a target-language classifier should gain the knowledge from a welltrained source classifier by training with the targetlanguage part of the parallel corpus and the soft labels made by the source classifier on the source language side. More specifically, we propose to distillate knowledge from the source language to the target language in the following 2-step process: • Firstly, we train a source-language classifier with both labeled training documents and adapt it to the unlabeled documents from the source-language side of the parallel corpus. The adaptation enforces our classifier to extract features that are: 1) discriminative for the classification task and 2) invariant with regard to the distribution shift between training and parallel data. • Secondly, we use the trained source-language classifier to obtain the soft labels for a parallel corpus, and the target-language part of the parallel corpus to train a target classifier, which yields a similar category distribution over target-language documents as that over source-language documents. We also use unlabeled testing documents in the target language to adapt the feature extractor in this training step. Intuitively, the first step addresses the potential domain/distribution mismatch between the labeled data and the unlabeled data in the source language. The second step addresses the potential mismatch between the target-domain training data (in the parallel corpus) and the test data (not in the parallel corpus). The soft-label based training of target classifiers makes our approach unique among parallel-corpus based CLTC methods (Section 2.1. The feature adaptation step makes our framework particularly robust in addressing the distributional difference between in-domain documents and parallel corpus, which is important for the success of CLTC with low-resource languages. The main contributions in this paper are the following: • We propose a novel framework (CLDFA) for knowledge distillation in CLTC through a parallel corpus. It has the flexibility to be built on a large family of existing monolingual text classification methods and enables the use of a large amount of unlabeled data from both source and target language. • CLDFA has the same computational complexity as the plug-in text classification method and hence is very efficient and scalable with the proper choice of plug-in text classifier. • Our evaluation on benchmark datasets shows that our method had a better or at least comparable performance than that of other stateof-art CLTC methods. 2 Related Work Related work can be outlined with respect to the representative work in CLTC and the recent progress in deep learning for knowledge distillation. 
2.1 CLTC Methods One branch of CLTC methods is to use lexical level mappings to transfer the knowledge from the source language to the target language. The work by Bel et al. (Bel et al., 2003) was the first effort to solve CLTC problem. They translated the target-language documents to source language using a bilingual dictionary. The classifier trained in the source language was then applied on those translated documents. Similarly, Mihalcea et al. (Mihalcea et al., 2007) built cross-lingual classifier by translating subjectivity words and phrases in the source language into the target language. Shi et al. (Shi et al., 2010) also utilized a bilingual dictionary. Instead of translating the documents, they tried to translate the classification model from source language to target language. Prettenhofer and Stein. (Prettenhofer and Stein, 2010) also used the bilingual dictionary as a word translation oracle and built their CLTC system on structural correspondence learning, a theory for domain adaptation. A more recent work by (Xu et al., 2016) extended seminal bilingual dictionaries with unlabeled corpora in low-resource languages. Chen et al. (Chen et al., 2016) used bilingual word embedding to map documents in source and target 1416 language into the same semantic space, and adversarial training was applied to enforce the trained classifier to be language-invariant. Some recent efforts in CLTC focus on the use of automatic machine translation (MT) technology. For example, Wan (Wan, 2009) used machine translation systems to give each document a source-language and a target-language version, where one version is machine-translated from the another one. A co-training (Blum and Mitchell, 1998) algorithm was applied on two versions of both source and target documents to iterative train classifiers in both languages. MTbased CLTC also include the work on multi-view learning with different algorithms, such as majority voting(Amini et al., 2009), matrix completion(Xiao and Guo, 2013) and multi-view coregularization(Guo and Xiao, 2012a). Another branch of CLTC methods focuses on representation learning or the mapping of the induced representations in cross-language settings (Guo and Xiao, 2012b; Zhou et al., 2016a, 2015, 2016b; Xiao and Guo, 2013; Jagarlamudi et al., 2011; De Smet et al., 2011; Vinokourov et al., 2002; Platt et al., 2010; Littman et al., 1998). For example, Meng et al. (Meng et al., 2012) and Lu et al. (Lu et al., 2011) used a parallel corpus to learn word alignment probabilities in a pre-processing step. Some other work attempts to find a languageinvariant (or interlingua) representation for words or documents in different languages using various techniques, such as latent semantic indexing (Littman et al., 1998), kernel canonical correlation analysis (Vinokourov et al., 2002), matrix completion(Xiao and Guo, 2013), principal component analysis (Platt et al., 2010) and Bayesian graphical models (De Smet et al., 2011). 2.2 Knowledge Distillation The idea of distilling knowledge in a neural network was proposed by Hinton et al (Hinton et al., 2015), in which they introduced a student-teacher paradigm. Once the cumbersome teacher network was trained, the student network was trained according to soften predictions of the teacher network. In the field of computer vision, it has been empirically verified that student network trained by distillation performs better than the one trained with hard labels. (Hinton et al., 2015; Romero et al., 2014; Ba and Caruana, 2014). 
Gupta et al.(Gupta et al., 2015) transfers supervision between images from different modalities(e.g. from RGB image to depth image). There are also some recent works applied distillation in the field of natural language. For example, Lili et al. (Mou et al., 2015) distilled task specific knowledge from a set of high-dimensional embeddings to a lowdimensional space. Zhiting et al. used an iterative distillation method to transfer the structured information of logic rules into the weights of a neural network. Kim et al. (Kim and Rush, 2016) applied knowledge distillation approaches in the field of machine translation to reduce the size of neural machine translation model. Our framework shares the same purpose of existing works that transfer knowledge between models of different properties, such as model complexity, modality, and structured logic. However, our transfer happens between models working on different languages. To the best of knowledge, this is the first work using knowledge distillation to bridge the language gap for NLP tasks. 3 Preliminary 3.1 Task and Notation CLTC aims to use the training data in the source language to build a model applicable in the target language. In our setting, we have labeled data in source language Lsrc = {xi, yi}L i=1, where xi is the labeled document in source language and yi is the label vector. We then have our test data in the target language, given by Ttgt = {x′ i}T i=1. Our framework can also use unlabeled documents from both languages in transductive learning settings. We use Usrc = {xi}M i=1 to denote sourcelanguage unlabeled documents,Utgt = {x′ i}N i=1 to denote target-language unlabeled documents, and Uparl = {(xi, x′ i)}P i=1 to denote a unlabeled bilingual parallel corpus where xi and x′ i are paired document translations of each other. We assume that the unlabeled parallel corpus does not overlap with the source-language training documents and the target-language test documents. 3.2 Convolutional Neural Network (CNN) as a Plug-in Classifier We use a state-of-the-art CNN-based neural network classifier (Kim, 2014) as the plug-in classifier in our framework. Instead of using a bag-ofwords representation for each document, the CNN model concatenates the word embeddings (vertical vectors) of each input document into a n × k 1417 matrix, where n is the length (number of word occurrences) of the document, and k is the dimension of word embedding. Denoting by x1:n = x1 ⊕x2 ⊕... ⊕xn as the resulted matrix, with ⊕the concatenation operator. One-dimensional convolutional filter w ∈Rhk with window size h operates on every consecutive h words, with non-linear function f and bias b. For window of size h started at index i, the feature after convolutional filter is given by: ci = f(w · xi:i+h−1 + b) A max-over-time pooling (Collobert et al., 2011) is applied on c over all possible positions such that each filter extracts one feature. The model uses multiple filters with different window sizes. The concatenated outputs from filters consist the feature of each document. We can see the convolutional filters and pooling layers as feature extractor f = Gf(x, θf), where θf contains parameters for embedding layer and convolutional layer. Theses features are then passed to a fully connected softmax layer to produce probability distributions over labels. We see the final fully connected softmax layer as a label classifier Gy(f, θy) that takes the output f from the feature extractor. 
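A minimal numpy sketch of this plug-in classifier may help fix the notation: the feature extractor Gf slides filters of window size h over the n × k embedding matrix, applies a nonlinearity f and max-over-time pooling, and the label classifier Gy is a fully connected softmax layer. The filter sizes, the choice of tanh for f, and the random weights are illustrative, not the exact configuration of Kim (2014).

```python
# Simplified sketch of the plug-in CNN classifier: G_f applies 1-D
# convolutional filters over the n x k matrix of word embeddings and
# max-over-time pools each filter map; G_y is a softmax layer on top.
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def extract_features(X, filters, biases):
    """X: (n, k) embedded document; filters[i]: (h_i, k).
    Returns one max-pooled feature per filter (the output of G_f)."""
    feats = []
    for w, b in zip(filters, biases):
        h, n = w.shape[0], X.shape[0]
        # c_i = f(w . x_{i:i+h-1} + b) at every window position
        c = np.array([np.tanh(np.sum(w * X[i:i + h]) + b)
                      for i in range(n - h + 1)])
        feats.append(c.max())            # max-over-time pooling
    return np.array(feats)

def classify(X, filters, biases, W_out, b_out):
    """G_y(G_f(x)): class probabilities from the pooled features."""
    f = extract_features(X, filters, biases)
    return softmax(W_out @ f + b_out)
```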
The final output of the model is given by Gy(Gf(x, θf), θy), which is jointly parameterized by {θf, θy}. We want to emphasize that our choice of the plug-in classifier here is mainly for its simplicity and scalability in demonstrating our framework. There is a large family of neural classifiers for monolingual text classification that could be used in our framework as well, including other convolutional neural networks (Johnson and Zhang, 2014), recurrent neural networks (Lai et al., 2015; Zhang et al., 2016; Johnson and Zhang, 2016; Sutskever et al., 2014; Dai and Le, 2015), the attention mechanism of (Yang et al., 2016), the deep dense network of (Iyyer et al., 2015), and more.

4 Proposed Framework

Let us introduce two versions of our model for cross-language knowledge distillation, i.e., the vanilla version and the full version with feature adaptation. Both are supported by the proposed framework. We denote the former by CLD-KCNN and the latter by CLDFA-KCNN.

4.1 Vanilla Distillation

Without loss of generality, assume we are learning a multi-class classifier for the target language. We have y ∈ {1, 2, ..., |v|}, where v is the set of all possible classes. We assume the base classification network produces a real-valued logit qj for each class. For example, in the case of the CNN text classifier, the logits can be produced by a linear transformation that takes the features extracted by the max-pooling layer and outputs a vector of size |v|. The logits are converted into class probabilities through the softmax layer, by normalizing each qj against all other logits:

\[ p_j = \frac{\exp(q_j / T)}{\sum_{k=1}^{|v|} \exp(q_k / T)} \tag{1} \]

where T is a temperature, normally set to 1. Using a higher value of T produces a softer probability distribution over classes.

The first step of our framework is to train the source-language classifier on the labeled source documents Lsrc. We use the standard temperature T = 1 and the cross-entropy loss as the objective to minimize over the examples (xi, yi) in the source training set:

\[ L(\theta_{src}) = - \sum_{(x_i, y_i) \in L_{src}} \sum_{k=1}^{|v|} \mathbb{1}\{y_i = k\} \log p(y = k \mid x_i; \theta_{src}) \tag{2} \]

where p(y = k | x; θsrc) is the source model controlled by the parameters θsrc and 1{·} is the indicator function.

In the second step, the knowledge captured in θsrc is transferred to the distilled model in the target language by training it on the parallel corpus. The intuition is that paired documents in the parallel corpus should receive the same class distribution from the source model and the target model. In the simplest version of our framework, for each source-language document in the parallel corpus, we predict a soft class distribution with the source model at a high temperature. We then minimize the cross-entropy between the soft distribution produced by the source model and the soft distribution produced by the target model on the paired target-language document. More formally, we optimize θtgt according to the following loss function over the document pairs (xi, x′i) in the parallel corpus:

\[ L(\theta_{tgt}) = - \sum_{(x_i, x'_i) \in U_{parl}} \sum_{k=1}^{|v|} p(y = k \mid x_i; \theta_{src}) \log p(y = k \mid x'_i; \theta_{tgt}) \tag{3} \]

During distillation, the same high temperature is used for training the target model. After it has been trained, we set the temperature to 1 for testing.

We can show that, under some assumptions, the two-step cross-lingual distillation is equivalent to distilling a target-language classifier in the target-language input space.

Lemma 1.
Assume the parallel corpus {xi, x′ i} ∈ Uparl is generated by x′ i ∼p(X′; η) and xi = t(x′ i), where η controls the marginal distribution of xi and t is a differentiable translation function with integrable derivative. Let fθsrc(t(x′)) be the function that outputs soft labels of p(y = k|t(x′); θsrc). The distillation given by equation 3 can be interpreted as distillation of a target language classifier fθsrc(t(x′)) on target language documents sampled from p(X′; η). fθsrc(t(x′)) is the classifier that takes input of target documents, translates them into source documents through t and makes prediction using the source classifier. If we further assume the testing documents have the same marginal distribution P(X′; η), then the distilled classifier should have similar generalization power as fθsrc(t(x′)). Theorem 2. Let source training data xi ∈ Lsrc has marginal distribution p(X; λ). Under the assumptions of lemma 1, further assume p(t(x′); λ) = p(x′; η), p(y|t(x′)) = p(y|x′) and t′(x′) ≈ C, where C is a constant. Then fθsrc(t(x′)) actually minimizes the expected loss in target language data Ex′∼p(X;η),y∼p(Y |x′)[L y, f(t(x′))  ]. Proof. By definition of equation 2, fθsrc(x) minimizes the expected loss Ex∼p(X;λ),y∼p(Y |x)[L y, f(x)  ], where L is cross-entropy loss in our case. Then we can write Ex∼p(X;λ),y∼p(Y |x)[L y, f(x)  ] = Z p(x; λ) X y p(y|x)L y, f(x)  dx = Z p(t(x′); λ) X y p(y|t(x′))L y, f(t(x′))  t′(x′)dx′ ≈C Z p(x′; η) X y p(y|x′)L y, f(t(x′))  dx′ =CEx′∼p(X;η),y∼p(Y |x′)[L y, f(t(x′))  ] 4.2 Distillation with Adversarial Feature Adaptation 15 10 5 0 5 10 15 20 25 20 15 10 5 0 5 10 15 Figure 1: Extracted features for source-language documents in the English-Chinese Yelp Hotel Review dataset. Red dots represent features of the documents in Lsrc and green dots represent the features of documents in Uparl, which is a generalpurpose parallel corpus. Although vanilla distillation is intuitive and simple, it cannot handle distribution mismatch issues. For example, the marginal feature distributions of source-language documents in Lsrc and Uparl could be different, so are the distributions of target-language documents in Uparl and Ttgt. According to theorem 2, the vanilla distillation works for the best performance under unrealistic assumption: p(t(x′)|λ) = p(x′|η). To further illustrate our point, we trained a CNN classifier according to equation 2 and used the features extracted by Gf to present the source-language documents in both Lsrc and Uparl. Then we projected the highdimensional features onto a 2-dimensional space via t-Distributed Stochastic Neighbor Embedding (t-SNE)(Maaten and Hinton, 2008). This resulted 1419 the visualization of the project data in Figures 1 and 2. It is quite obvious in Figure 1 that the generalpurpose parallel corpus has a very different feature distribution from that of the labeled source training set. Even for machine-translated parallel data from the same domain, as shown in figure 2, there is still a non-negligible distribution shift from the source language to the target language for the extracted features. Our interpretation of this observation is that when the MT system (e.g. Google Translate) is a general-purpose one, it non-avoidably add translation ambiguities which would lead the distribution shift from the original domain. 
To address the distribution divergence brought by either a general-purpose parallel corpus or an imperfect MT system, we seek to adapt the features extraction part of our neural classifier such that the feature distributions on both sides should be close as possible in the newly induced feature space. We adapt the adversarial training method by (Ganin and Lempitsky, 2014) to the cross-lingual settings in our problems. Given a set of training set of L = {xi, yi}i=1,...,N and an unlabeled set U = {x′ i}i=1,...,M, our goal is to find a neural classifier Gy(Gf(x, θf), θy), which has good discriminative performance on L and also extracts features which have similar distributions on L and U. One way to maximize the similarity of two distributions is to maximize the loss of a discriminative classifier whose job is to discriminate the two feature distributions. We denote this classifier by Gd(·, θd), which is parameterized by θd. At training time, we seek θf to minimize the loss of Gy and maximize the loss of Gd. Meanwhile, θy and θd are also optimized to minimize their corresponding loss. The overall optimization could be summarized as follows: E(θf, θy, θd) = X xi,yi∈L Ly(yi, Gy(Gf(xi, θf), θy)) −α X xi∈L Ld(0, Gd(Gf(xi, θf), θd)) −α X xj∈U Ld(1, Gd(Gf(xj, θf), θd)) where Ly is the loss function for true labels y, Ld is loss function for binary labels indicating the source of data and α is the hyperparameter that controls the relative importance of two losses. We optimize θf, θy for minimizing E and optimize θd for maximizing E. We jointly optimize θf, θy, θd through the gradient reversal layer(Ganin and Lempitsky, 2014). We use this feature adaptation technique to firstly adapt the source-language classifier to the source-language documents of the parallel corpus. When training the target-language classifier by matching soft labels on the parallel corpus, we also adapt the classifier to the target testing documents. We use cross-entropy loss functions as Ly and Ld for both feature adaptation. 5 Experiments and Discussions 5.1 Dataset Our experiments used two benchmark datasets, as described below. (1) Amazon Reviews Language Domain # of Documents English book 50000 DVD 30000 music 25220 German book 165470 DVD 91516 music 60392 French book 32870 DVD 9358 music 15940 Japanese book 169780 DVD 68326 music 55892 Table 1: Dataset Statistics for the Amazon reviews dataset We used the multilingual multi-domain Amazon review dataset created by Prettenhofer and Stein (Prettenhofer and Stein, 2010). The dataset contains Amazon reviews in three domains: book, DVD and music. Each domain has the reviews in four different languages: English, German, French and Japanese. We treated English as the source language and the rest three as the target languages, respectively. This gives us 9 tasks (the product of the 3 domains and the 3 target languages) in total. For each task, there are 1000 positive and 1000 negative reviews in English and the target language, respectively. (Prettenhofer and Stein, 2010) also provides 2000 parallel reviews per task, 1420 15 10 5 0 5 10 15 20 20 15 10 5 0 5 10 15 (a) Germany:DVD 15 10 5 0 5 10 15 20 15 10 5 0 5 10 15 20 (b) French:Music 15 10 5 0 5 10 15 15 10 5 0 5 10 15 20 (c) Japanese:Book Figure 2: Extracted features for the source-language documents in the Amazon Reviews dataset. 
Red dots represent the features of the labeled training documents in Lsrc, and green dots represent the features of the documents in Uparl, which are the machine-translated documents from a target language. Below each figure is the target language and the domain of review (Section 5.1). that were generated using Google Translate 1, and used by us for cross-language distillation. There are also several thousands of unlabeled reviews in each language. The statistics of unlabeled data is summarized in Table 1. All the reviews are tokenized using standard regular expressions except for Japanese, for which we used a publicly available segmenter 2. (2) English-Chinese Yelp Hotel Reviews This dataset was firstly used for CLTC by (Chen et al., 2016). The task is to make sentence-level sentiment classification with 5 labels(rating scale from 1 to 5), using English as the source language and Chinese as the target language. The labeled English data consists of balanced labels of 650k Yelp reviews from Zhang et al. (Zhang et al., 2015). The Chinese data includes 20k labeled Chinese hotel reviews and 1037k unlabeled ones from (Lin et al., 2015). Following the approach by (Chen et al., 2016), we use 10k of labeled Chinese data as validation set and another 10k hotel reviews as held-out test data. We a random sample of 500k parallel sentences from UM-courpus(Tian et al., 2014), which is a general-purpose corpus designed for machine translation. 5.2 Baselines We compare the proposed method with other stateof-the-art methods as outlined below. (1) Parallel-Corpus based CLTC Methods Methods in this category all use an unlabeled parallel corpus. Methods named PL-LSI (Littman 1translate.google.com 2https://pypi.python.org/pypi/tinysegmenter et al., 1998), PL-OPCA (Platt et al., 2010) and PL-KCAA (Vinokourov et al., 2002) learn latent document representations in a shared lowdimensional space by performing the Latent Semantic Indexing (LSI), the Oriented Principal Component Analysis (OPCA) and a kernel (namely KCAA) for the parallel text. PL-MC (Xiao and Guo, 2013) recovers missing features via matrix Completion, and also uses LSI to induce a latent space for parallel text. All these methods train a classifier in the shared feature space with labeled training data from both the source and target languages. (2) MT-based CLTC Methods The methods in this category all use an MT system to translate each test document in the target language to the source language in the testing phase. The prediction on each translated document is made by a source-language classifier, which can be a Logistic Regression model (MT+LR) (Chen et al., 2016) or a deep averaging network (MT+DAN) (Chen et al., 2016). (3) Adversarial Deep Averaging Network Similar to our approach, the adversarial Deep Averaging Network (ADAN) also exploits adversarial training for CLTC (Chen et al., 2016). However, it does not have the parallel-corpus based knowledge distillation part (which we do). Instead, it uses averaged bilingual embeddings of words as its input and adapts the feature extractor to produce similar features in both languages. We also include the results of mSDA for the Yelp Hotel Reviews dataset. 
mSDA (Chen et al., 2012) is a domain adaptation method based on 1421 Target Language Domain PL-LSI PL-KCCA PL-OPCA PL-MC CLD-KCNN CLDFA-KCNN German book 77.59 79.14 74.72 79.22 82.54 83.95* DVD 79.22 76.73 74.59 81.34 82.24 83.14* music 73.81 79.18 74.45 79.39 74.65 79.02 French book 79.56 77.56 76.55 81.92 81.6 83.37 DVD 77.82 78.19 70.54 81.97 82.41 82.56 music 75.39 78.24 73.69 79.3 83.01 83.31* Janpanese book 72.68 69.46 71.41 72.57 74.12 77.36* DVD 72.55 74.79 71.84 76.6 79.67 80.52* music 73.44 73.54 74.96 76.21 73.69 76.46 Averaged Accuracy 75.78 76.31 73.64 78.72 79.33 81.08* Table 2: Accuracy scores of methods on the Amazon Reviews dataset: the best score in each row (a task) is highlighted in bold face. If the score of CLDFA-KCNN is statistically significantly better (in one-sample proportion tests) than the best among the baseline methods, it is marked using a star. Model Accuracy mSDA 31.44% MT-LR 34.01% MT-DAN 39.66% ADAN 41.04% CLD-KCNN 40.96% CLDFA-KCNN 41.82% Table 3: Accuracy scores of methods on the English-Chinese Yelp Hotel Reviews dataset stacked denoising autoencoders, which has been proved to be effective in cross-domain sentiment classification evaluations. We show the results reported by (Chen et al., 2012), where they used bilingual word embedding as input for mSDA. 5.3 Implementation Detail We pre-trained both the source and target classifier with unlabeled data in each language. We ran word2vec(Mikolov et al., 2013) 3 on the tokenized unlabeled corpus. The learned word embeddings are used to initialize the word embedding look-up matrix, which maps input words to word embeddings and concatenates them into input matrix. We fine-tuned the source-language classifier on the English training data with 5-fold crossvalidation. For English-Chinese Yelp-hotel review dataset, the temperature T(Section 4.1) in distillation is tuned on validation set in the target language. For Amazon review dataset, since there is no default validation set, we set temperature from low to high in {1, 3, 5, 10} and take the average among all predictions. 3https://code.google.com/archive/p/word2vec/ 5.4 Main Results In tables 2 and 3 we compare the results of our methods (the vanilla version CLD-KCNN and the full version CLDFA-KCNN) with those of other methods based on the published results in the literature. The baseline methods are different in these two tables as they were previously evaluated (by their authors) on different benchmark datasets. Clearly, CLDFA-KCNN outperformed the other methods on all except one task in these two datasets, showing that knowledge distillation is successfully carried out in our approach. Noticing that CLDFA-KCNN outperformed CLD-KCNN, showing the effectiveness of adversarial feature extraction in reducing the distribution mismatch between the parallel corpus and the train/test data in the target domain. We should also point out that in Table 2, the four baseline methods (PL-LSI, PL-KCCA, PL-OPCA and PL-MC) were evaluated under the condition of using additional 100 labeled target documents for training, according to the author’s report (Xiao and Guo, 2013). On the other hand, our methods (CLD-KCNN and CLDFA-KCNN) were evaluated under a tougher condition, i.e., not using any labeled data in the target domains. We also test our framework when a few training documents in the target language are available. 
A simple way to utilize the target-language supervision is to fit the target-language model with labeled target data after optimizing with our crosslingual distillation framework. The performance of CLD-KCNN and CLDFA-KCNN trained with different sizes of labeled target-language data is shown in figure 3. We also compare the performance of training the same classifier using only 1422 the target-language labels(Target Only in figure 3). As we can see, our framework can efficiently utilize the extra supervision and improve the performance over the training using only the targetlanguage labels. The margin is most significant when the size of the target-language label is relatively small. 0 100 200 300 400 500 600 700 800 Size of labeled target data 0.60 0.65 0.70 0.75 0.80 0.85 0.90 Accuracy Target Only CLD-KCNN CLDFA-KCNN Figure 3: Accuracy scores of methods using varying sizes of target-language labeled data on the Amazon review dataset. The target language is German and the domain is music. The parallel corpus has a fixed size of 1000 and the size of the labeled target-language documents is shown on the x-axis 6 Conclusion This work introduces a novel framework for distillation of discriminative knowledge across languages, providing effective and efficient algorithmic solutions for addressing domain/distribution mismatch issues in CLTC. The excellent performance of our approach is evident in our evaluation on two CLTC benchmark datasets, compared to that of other state-of-the-art methods. Acknowledgement We thank the reviewers for their helpful comments. This work is supported in part by Defense Advanced Research Projects Agency Information Innovation Oce (I2O), the Low Resource Languages for Emergent Incidents (LORELEI) Program, Issued by DARPA/I2O under Contract No. HR0011-15-C-0114, by the National Science Foundation (NSF) under grant IIS-1546329. References Massih Amini, Nicolas Usunier, and Cyril Goutte. 2009. Learning from multiple partially observed views-an application to multilingual text categorization. In Advances in neural information processing systems. pages 28–36. Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Advances in neural information processing systems. pages 2654–2662. Nuria Bel, Cornelis HA Koster, and Marta Villegas. 2003. Cross-lingual text categorization. Research and Advanced Technology for Digital Libraries pages 126–139. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory. ACM, pages 92–100. Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. arXiv preprint arXiv:1206.4683 . Xilun Chen, Ben Athiwaratkun, Yu Sun, Kilian Weinberger, and Claire Cardie. 2016. Adversarial deep averaging networks for cross-lingual sentiment classification. arXiv preprint arXiv:1606.01614 . Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems. pages 3079–3087. Wim De Smet, Jie Tang, and Marie-Francine Moens. 2011. Knowledge transfer across multilingual corpora via latent topics. In Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer, pages 549–560. 
Kevin Duh, Akinori Fujino, and Masaaki Nagata. 2011. Is machine translation ripe for cross-lingual sentiment classification? In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2. Association for Computational Linguistics, pages 429–433. Yaroslav Ganin and Victor Lempitsky. 2014. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495 . Yuhong Guo and Min Xiao. 2012a. Cross language text classification via subspace co-regularized multiview learning. arXiv preprint arXiv:1206.6481 . Yuhong Guo and Min Xiao. 2012b. Transductive representation learning for cross-lingual text classification. In Data Mining (ICDM), 2012 IEEE 12th International Conference on. IEEE, pages 888–893. 1423 Saurabh Gupta, Judy Hoffman, and Jitendra Malik. 2015. Cross modal distillation for supervision transfer. arXiv preprint arXiv:1507.00448 . Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 . Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the Association for Computational Linguistics. Jagadeesh Jagarlamudi, Raghavendra Udupa, Hal Daum´e III, and Abhijit Bhole. 2011. Improving bilingual projections via sparse covariance matrices. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 930–940. Rie Johnson and Tong Zhang. 2014. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058 . Rie Johnson and Tong Zhang. 2016. Supervised and semi-supervised text categorization using lstm for region embeddings. In Proceedings of The 33rd International Conference on Machine Learning. pages 526–534. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 . Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. arXiv preprint arXiv:1606.07947 . Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI. pages 2267–2273. Yiou Lin, Hang Lei, Jia Wu, and Xiaoyu Li. 2015. An empirical study on sentiment classification of chinese review using word embedding. arXiv preprint arXiv:1511.01665 . Michael L Littman, Susan T Dumais, and Thomas K Landauer. 1998. Automatic cross-language information retrieval using latent semantic indexing. In Cross-language information retrieval, Springer, pages 51–62. Bin Lu, Chenhao Tan, Claire Cardie, and Benjamin K Tsou. 2011. Joint bilingual sentiment classification with unlabeled parallel corpora. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 320–330. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research 9(Nov):2579–2605. Xinfan Meng, Furu Wei, Xiaohua Liu, Ming Zhou, Ge Xu, and Houfeng Wang. 2012. Cross-lingual mixture model for sentiment classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersVolume 1. Association for Computational Linguistics, pages 572–581. Rada Mihalcea, Carmen Banea, and Janyce M Wiebe. 2007. 
Learning multilingual subjective language via cross-lingual projections . Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Lili Mou, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Distilling word embeddings: An encoding approach. arXiv preprint arXiv:1506.04488 . John C Platt, Kristina Toutanova, and Wen-tau Yih. 2010. Translingual document representations from discriminative projections. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 251–261. Peter Prettenhofer and Benno Stein. 2010. Crosslanguage text classification using structural correspondence learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1118–1127. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550 . Lei Shi, Rada Mihalcea, and Mingjun Tian. 2010. Cross language text classification by model translation and semi-supervised learning. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1057–1067. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Liang Tian, Derek F Wong, Lidia S Chao, Paulo Quaresma, Francisco Oliveira, and Lu Yi. 2014. Um-corpus: A large english-chinese parallel corpus for statistical machine translation. In LREC. pages 1837–1842. Alexei Vinokourov, John Shawe-Taylor, and Nello Cristianini. 2002. Inferring a semantic representation of text via cross-language correlation analysis. In NIPS. volume 1, page 4. Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL 1424 and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1Volume 1. Association for Computational Linguistics, pages 235–243. Min Xiao and Yuhong Guo. 2013. A novel two-step method for cross language representation learning. In Advances in Neural Information Processing Systems. pages 1259–1267. Ruochen Xu, Yiming Yang, Hanxiao Liu, and Andrew Hsi. 2016. Cross-lingual text classification via model translation with limited dictionaries. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. ACM, pages 95–104. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. . Rui Zhang, Honglak Lee, and Dragomir Radev. 2016. Dependency sensitive convolutional neural networks for modeling sentences and documents. arXiv preprint arXiv:1611.02361 . Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems. pages 649–657. Huiwei Zhou, Long Chen, Fulin Shi, and Degen Huang. 2015. Learning bilingual sentiment word embeddings for cross-language sentiment classification. ACL. Xinjie Zhou, Xianjun Wan, and Jianguo Xiao. 2016a. 
Cross-lingual sentiment classification with bilingual document representation learning . Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016b. Attention-based lstm network for cross-lingual sentiment classification . 1425
2017
130
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1426–1435, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1131

Understanding and Predicting Empathic Behavior in Counseling Therapy

Verónica Pérez-Rosas1, Rada Mihalcea1, Kenneth Resnicow2, Satinder Singh1 and Lawrence An3
1Computer Science and Engineering, 2School of Public Health, 3Center for Health Communications Research, University of Michigan
{vrncapr,mihalcea,kresnic,baveja,lcan}@umich.edu

Abstract

Counselor empathy is associated with better outcomes in psychology and behavioral counseling. In this paper, we explore several aspects pertaining to counseling interaction dynamics and their relation to counselor empathy during motivational interviewing encounters. In particular, we analyze aspects such as participants' engagement, participants' verbal and nonverbal accommodation, and the topics being discussed during the conversation, with the final goal of identifying linguistic and acoustic markers of counselor empathy. We also show how we can use these findings alongside other raw linguistic and acoustic features to build accurate counselor empathy classifiers, with accuracies of up to 80%.

1 Introduction

Behavioral counseling is an important tool for addressing public health issues such as mental health, substance abuse, and nutrition problems, among others. This has motivated increased interest in the study of the mechanisms associated with successful interventions. Among them, counselor empathy has been identified as a key intervention component that relates to positive therapy outcomes. Displaying empathic behavior helps counselors build rapport with their clients. Empathy levels experienced during counseling have a significant effect on treatment outcomes, as clients who perceive their counselor as empathic are more likely to improve than the ones who do not (Moyers and Miller, 2013).

In this paper, we apply quantitative approaches to understand the dynamics of counseling interactions and their relation to counselor empathy. We focus our analysis on counseling conducted using Motivational Interviewing (MI), a well-established, evidence-based counseling style in which counselor empathy is defined as the active interest in and effort to understand the client's perspective (Miller and Rollnick, 2013).

We address four main research questions. First, are there differences in how the counselor and the client engage during empathic conversations? We explore this question by conducting turn-by-turn word frequency analyses of participants' interactions across the counseling conversations. Second, are there differences in verbal and vocal mimicry patterns occurring during high and low empathy interactions? We address this question by measuring the degree of language matching, verbal and nonverbal coordination, and power dynamics expressed during the interaction. Third, are there content differences in counselor discourse during high and low empathy interactions? We answer this question by applying topic modeling to identify the topics that are more salient in high and low empathy interventions (or in both). Finally, fourth, can we build accurate classifiers of counselor empathy?
We show how the linguistic and acoustic empathy markers identified in our analyses, together with other raw features, can be used to construct classifiers able to predict counselor empathy with accuracies of up to 80%.

2 Related Work

There have been several efforts to study the role of empathy during counseling interactions. (Xiao et al., 2012) applied a text-based approach to discriminate empathic from non-empathic encounters using word-frequency analysis. They conducted a set of experiments aiming to predict empathy at the utterance and session level on a manually annotated dataset. Results showed that empathy can be predicted at reasonable accuracy levels, comparable to human assessments. (Gibson et al., 2015) presented a more refined approach for this task, which in addition to n-grams included features derived from the Linguistic Inquiry and Word Count, LIWC (Tausczik and Pennebaker, 2010), as well as psycholinguistic norms.

Other research has focused on exploring aspects related to counselor empathy skills, such as the ability to match the client's language. (Lord et al., 2015) analyzed the language coordination between client and counselor using Language Style Synchrony (LSS), a measure of the degree of similarity in word usage among speakers in adjacent talking turns. They found that empathy scores are positively related to LSS, and that higher levels of LSS are likely to result in higher empathy scores.

Another line of work has explored the use of the acoustic component to predict empathy levels during counseling encounters. (Xiao et al., 2014) presented a study on the automatic evaluation of counselor empathy based on the analysis of the correlation between prosody patterns and the degree of empathy shown by the therapist during the counseling interactions. More recently, (Xiao et al., 2015) addressed the empathy prediction task by deriving language models from transcripts obtained with an automatic speech recognition system, thus eliminating the need for human intervention during speaker segmentation and transcription.

Most of this previous research has focused on the prediction task and explored a variety of linguistic and acoustic representations for this goal. While some of this work has explored the linguistic accommodation between speakers, previous methods have not fully explored the conversational aspects of the counseling interaction. In this paper, we seek to explore how conversational aspects such as engagement, accommodation, and discourse topics are related to counselor empathy by using strategies such as turn-by-turn word frequency analysis, language coordination, power dynamics analysis, and topic modeling. Furthermore, we build accurate empathy classifiers that rely on acoustic and linguistic cues inspired by our conversational analyses.

3 Counseling Empathy Dataset

The dataset used in this study consists of 276 MI audio-recorded sessions from: two clinical research studies on smoking cessation and medication adherence (Catley et al., 2012; Goggin et al., 2013); recordings of MI students from a graduate-level MI course; wellness coaching phone calls; and brief medical encounters in dental practice and student counseling. The dataset was obtained from a previous study conducted by the authors; further details can be found in (Pérez-Rosas et al., 2016).

The counseling sessions target three behavior changes: diet changes (72 sessions), smoking cessation (95 sessions), and medication adherence (93 sessions). In addition, there are 16 sessions on miscellaneous topics.
The full set comprises 97.8 hours of audio, with an average session length of 20.8 minutes and a standard deviation of 11.5 minutes.

3.1 Data Preprocessing

Before conducting our analysis on the collected dataset, we performed several preprocessing steps to ensure the confidentiality of the data and to enable automatic text and audio feature extraction. First, all the counseling recordings were subjected to an anonymization process. This includes manually trimming the audio to remove introductions, and inserting silences to replace references to participants' names and locations. Next, 162 sessions for which transcripts were not readily available were transcribed via Mechanical Turk (Marge et al., 2010) using the following guidelines: 1) transcribe speech turn by turn, 2) clearly identify the speaker (either client or counselor), and 3) include speech disfluencies, such as false starts, repetitions of whole words or parts of words, and fillers. Transcriptions were manually verified at random points to avoid spam and ensure their quality.

Since sessions were recorded in natural conditions, we applied speech enhancement methods to remove noise and improve the speech signal quality. We started by converting the audio signal from a stereo to a mono channel and to a uniform sample rate of 16 kHz. We then applied the Mean Square Error estimation of spectral amplitude for audio denoising, as implemented in the Voicebox Speech Processing toolbox (Brookes, 2003).

To allow for a turn-by-turn audio analysis of the counseling interaction, we processed the speech signal to separate client and counselor speech segments. To accomplish this task, we used an automatic speech-to-text forced alignment API (the YouTube Data API). We then used the automatically obtained time stamps to segment the audio and derive speaker-specific speech segments for each counseling dyad.

3.2 Data Annotation

Empathy assessments were obtained using the Motivational Interviewing Treatment Integrity (MITI) coding scheme version 4.1 (Moyers, 2014). Each session was assigned an empathy score using a 5-point Likert scale, which measures the extent to which the clinician understands or makes an effort to grasp the client's perspective and feelings. The coding was conducted by two independent teams of three coders who had previous experience in MI and MI coding. Annotations were conducted using the session audio recording along with its transcript. The inter-rater reliability, measured in a random sample of 20 double-coded sessions using the Intra-Class Correlation Coefficient (obtained with a two-way mixed model with absolute agreement), was 0.60, suggesting that the annotators showed moderate agreement on empathy assessments. The reported annotation agreement was calculated on the original 5-point empathy scale and is within the range reported in previous Motivational Interviewing studies (0.60–0.62).

Because of the skewed frequency distribution of the empathy scores in the dataset, we decided to conduct our analyses using empathy as a binary outcome, by classifying scores from 1 to 3 as low empathy, and scores of 4 and 5 as high empathy. This resulted in 179 high empathy sessions and 97 low empathy sessions.
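For concreteness, the short sketch below shows one way to derive the binary empathy labels described above; it is our illustration rather than the authors' code, and the session identifiers and scores are hypothetical.

```python
# Illustrative sketch (not the authors' code): map 5-point MITI empathy
# scores to the binary high/low outcome used in the analyses.
from collections import Counter

def binarize_empathy(score: int) -> str:
    # Scores 1-3 are treated as low empathy, scores 4-5 as high empathy.
    return "low" if score <= 3 else "high"

# Hypothetical (session_id, MITI empathy score) pairs.
sessions = [("s001", 4), ("s002", 2), ("s003", 5), ("s004", 3)]
labels = {sid: binarize_empathy(score) for sid, score in sessions}
print(Counter(labels.values()))  # e.g. Counter({'high': 2, 'low': 2})
```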
4 Empathic vs Non-Empathic Interactions: Counselor Engagement

We start by exploring differences in verbal exchange length between low and high empathy encounters as an indirect measure of participants' engagement during the conversation. In this analysis, we account for the time dimension by segmenting the conversation into five equal portions.

First, we look at the ratio of words exchanged between the counselor and the client for the different fractions of the conversation. This ratio is calculated for each pair of turns in the conversation, and it is simply measured as the number of words uttered by the counselor divided by the number of words uttered by the client; the turn-level word ratios are then averaged over all the turns included in a portion of the conversation.

[Figure 1: Word ratio by turn between clients and counselors as the conversation progresses.]

[Figure 2: Average words per turn by counselors and clients as the conversation progresses.]

As shown in Figure 1, low empathy interactions present a noticeably lower ratio of words exchanged between counselors and clients across the interaction, while high empathy exchanges show consistently higher levels of interaction. This can be further observed in Figure 2, which shows that more empathic counselors speak considerably less than their clients, and less than their less empathic counterparts. This is in line with findings in the MI literature indicating that counselors who reduce the amount of time they talk with their clients are likely to allow more time for the patient to talk and explore their concerns, thus improving the perception of empathy and understanding.
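The sketch below illustrates the engagement measure just described, i.e., the counselor/client word ratio per turn pair averaged within five equal portions of the conversation; it is our reconstruction for illustration, not the authors' code, and the toy turn counts are invented.

```python
# Illustrative sketch of the turn-level word-ratio measure described above
# (our reconstruction, not the authors' code).
import numpy as np

def word_ratios_by_portion(turn_pairs, n_portions=5):
    """Average counselor/client word ratio within equal portions of the turns."""
    ratios = np.array([c / max(p, 1) for c, p in turn_pairs], dtype=float)
    portions = np.array_split(ratios, n_portions)   # five equal slices of the turns
    return [float(p.mean()) if len(p) else float("nan") for p in portions]

# Toy conversation: (counselor words, client words) for consecutive turn pairs.
conversation = [(25, 30), (18, 35), (20, 28), (15, 40), (22, 33),
                (12, 45), (16, 38), (14, 42), (19, 36), (13, 44)]
print(word_ratios_by_portion(conversation))
```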
5 The Role of Verbal and Nonverbal Accommodation in Empathy

Accommodation in health care communication involves counselor and client coordination, including participation in communication and decision making, and shared understanding (D'Agostino and Bylund, 2014). We analyze accommodation and its relation to empathy by exploring verbal and nonverbal behaviors exhibited by counseling participants during MI encounters. In addition to accommodation assessments, we explore the direction of the accommodation phenomena, i.e., whether the counselor is mirroring or leading the client.

5.1 Verbal Accommodation

In order to explore how the verbal accommodation phenomena in our dataset relate to the MITI empathy assessments, we use two methods drawn from Conversation Accommodation Theory. The first one is the Linguistic Style Matching (LSM) measure proposed in (Gonzales et al., 2009) to quantify the extent to which one speaker, i.e., the counselor, matches the language of the other, i.e., the client. The second one is the Linguistic Style Coordination (LSC) metric proposed in (Danescu-Niculescu-Mizil et al., 2011), which quantifies the degree to which one individual immediately echoes the linguistic style of the person they are responding to. Both metrics are evaluated across eight linguistic markers from the LIWC dictionary (Tausczik and Pennebaker, 2010), i.e., quantifiers, conjunctions, adverbs, auxiliary verbs, prepositions, articles, personal pronouns, and impersonal pronouns. LSM produces a score ranging between 0 and 1 indicating how much one person uses types of words comparable to the other person, while LSC generates a coordination score in the range of -1 to 1 indicating the degree of immediate coordination between speakers.

While both measures are designed to analyze verbal synchrony, they can reveal different aspects of the counseling interaction. We use LSM to explore the potential match of language between counselors and clients across the counseling interaction, and we use LSC to quantify whether the counselor's use of a specific linguistic marker in a given turn increases the probability of the client using the same marker in their reply. In addition, we use LSC to investigate power differences during the conversation based on the amount of coordination displayed by either counselor or client, under the assumption that the speaker who accommodates less holds the most power during the conversation (Danescu-Niculescu-Mizil et al., 2012).

[Figure 3: Linguistic style matching across five equal segments of the conversation duration.]

Figure 3 shows the average LSM scores for the eight linguistic markers measured on five equal segments of the conversation duration. As expected, we observe an increasing trend of language style matching during the counseling interaction in both high-empathic and low-empathic encounters, as people usually match their language unconsciously and regardless of the outcome of the conversation (Niederhoffer and Pennebaker, 2002). Interestingly, counselors and clients present a higher degree of language matching during high empathy encounters, while speakers in low empathy encounters show lower levels of style matching.

We evaluate the immediate LSC in two directions: coordination of counselors toward clients, and coordination of clients toward counselors. Results indicate low levels of immediate coordination in both cases, with values ranging between -0.06 and 0.1. Nonetheless, the results also suggest that clients coordinate more than counselors, with LSC(client, counselor) = -0.030 compared to LSC(counselor, client) = -0.038, which further suggests that counselors have more power (control) during the conversation. (The differences in coordination observed during the analyses are statistically significant; two-tailed t-test, p = 0.0156.)

[Figure 4: Linguistic style coordination from counselors to clients. OTHER includes: quantifiers, conjunctions, adverbs, articles, and impersonal pronouns.]

[Figure 5: Linguistic style coordination from clients to counselors. OTHER includes: conjunctions, adverbs, auxiliary verbs, prepositions, personal pronouns, and impersonal pronouns.]

Analyses of the LSC levels from counselors to clients on different linguistic markers across high-empathic and low-empathic interactions provide interesting findings. While counselors generally show lower levels of coordination in the use of prepositions, auxiliary verbs, and personal pronouns (Figure 4), low-empathic counselors show higher LSC levels than their high-empathic counterparts. This can be attributed to the use of confrontational language (e.g., I, could, should, and have), which is often associated with low empathy. Similar analyses on the client side, shown in Figure 5, indicate significant differences in the use of linguistic markers by the client (except for articles and quantifiers). In particular, during low empathy encounters, clients coordinate more on the use of conjunctions, adverbs, auxiliary verbs, prepositions, personal pronouns, and impersonal pronouns.
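As an illustration of the style-matching measure used above, the sketch below computes a simple LSM score from per-speaker proportions of the eight LIWC function-word categories. It is our reconstruction of one common LSM formulation, not necessarily the exact variant used in the paper, and the category proportions are invented.

```python
# Illustrative LSM sketch (our reconstruction of a common formulation, not
# necessarily the paper's exact implementation).
MARKERS = ["quant", "conj", "adverb", "auxverb",
           "prep", "article", "ppron", "ipron"]

def lsm(counselor_props, client_props, eps=1e-4):
    """Average per-category matching; 0 = no matching, 1 = perfect matching."""
    scores = []
    for m in MARKERS:
        c = counselor_props.get(m, 0.0)   # proportion of counselor words in category m
        p = client_props.get(m, 0.0)      # proportion of client words in category m
        scores.append(1.0 - abs(c - p) / (c + p + eps))
    return sum(scores) / len(scores)

# Hypothetical per-speaker category proportions for one conversation segment.
counselor = {"quant": 0.02, "conj": 0.06, "adverb": 0.05, "auxverb": 0.08,
             "prep": 0.12, "article": 0.06, "ppron": 0.09, "ipron": 0.05}
client    = {"quant": 0.03, "conj": 0.05, "adverb": 0.06, "auxverb": 0.07,
             "prep": 0.10, "article": 0.05, "ppron": 0.11, "ipron": 0.06}
print(round(lsm(counselor, client), 3))
```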
5.2 Nonverbal Accommodation

Empathy is also shown through nonverbal channels such as the visual and acoustic channels (Regenbogen et al., 2012). We explore the role of nonverbal mirroring in empathy by looking at vocal synchrony patterns shared between counselors and clients during the counseling interaction. We focus our analysis on vocal pitch, which is defined as the psychological perception of the voice frequency in terms of how high or how low it sounds. Pitch carries information about the speaker's emotional state, and has been shown to be related to the perception of empathy in psychotherapy (Reich et al., 2014).

We evaluate speech synchrony during turn-taking trajectories in the conversation. We consider two cases: sequences where the counselor replies to the client's statements (e.g., rephrasing), and sequences where the counselor leads the interaction (e.g., asking questions). Starting with the turn-by-turn segmentation (on average, there are approximately 40 counselor-client turns per conversation), we extract pitch (F0) on each speaker-specific segment using OpenEar (Eyben et al., 2009); the feature extraction is done at the audio-frame level every 10 ms with a 25 ms Hamming window. We then measure the correlation of all pitch values during counselor following turns and during counselor leading turns across the entire therapy session. The terms "counselor following" and "counselor leading" simply refer to how the correlation is computed: in "counselor following," we consider the set of counselor utterances and the previous client utterances; in "counselor leading," we consider the set of counselor utterances and the following client utterances.

[Figure 6: Pitch correlation among participants during counselor following turns as the conversation progresses.]

[Figure 7: Pitch correlation among participants during counselor leading turns as the conversation progresses.]

Figures 6 and 7 show the trends in pitch synchrony across high-empathic and low-empathic encounters in the dataset. In the first figure, we observe that when replying to clients, counselors who are given low empathy scores show higher vocal synchrony levels than counselors who receive higher empathy scores. A potential explanation for this finding is that a counselor who mirrors the client's pitch might amplify the emotional distress of the client, or may suggest the counselor's lack of confidence or knowledge (Reich et al., 2014). On the other hand, we observe the opposite trend for the counselor leading sequences, where higher vocal synchrony levels are observed during high empathy interactions, which can be attributed to clients mirroring the counselor's speech. The similarity is noticeably higher at the beginning of the conversation and gradually decreases as the conversation progresses. Moreover, the differences are not significant for the 40-100% turns, but results for the first 20% suggest significant differences at least in the beginning of the conversation (p < 0.05). This further confirms the similarities between verbal and nonverbal accommodation: as in Section 5.1, during high-empathic encounters counselors hold control of the conversation and clients accommodate more than counselors.
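To make the synchrony computation concrete, the sketch below correlates per-turn pitch values for counselor-following turn pairs; it is our illustration rather than the authors' code, and the F0 values are invented stand-ins for the extracted pitch frames.

```python
# Illustrative sketch of the "counselor following" pitch synchrony measure
# described above (our reconstruction, not the authors' code).
import numpy as np

def following_synchrony(client_f0, counselor_f0):
    """Correlate counselor turns with the immediately preceding client turns."""
    pairs = [(c, k) for c, k in zip(client_f0, counselor_f0)
             if not (np.isnan(c) or np.isnan(k))]
    client_vals, counselor_vals = map(np.array, zip(*pairs))
    return float(np.corrcoef(client_vals, counselor_vals)[0, 1])

# Hypothetical per-turn median F0 values (Hz): each client turn is followed
# by the counselor reply at the same index.
client_f0    = [190.0, 185.5, float("nan"), 200.2, 195.1, 188.7]
counselor_f0 = [120.3, 118.9, 121.0, 125.4, 123.2, 119.5]
print(round(following_synchrony(client_f0, counselor_f0), 3))
```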
6 Topics Discussed during Counseling Interaction and their Relation to Empathy

We also conduct content analysis on the counseling interactions, to identify the themes discussed in high-empathic and low-empathic encounters. For this task, we employ the Meaning Extraction Method (MEM) (Chung and Pennebaker, 2008), a topic extraction method that identifies the most common words used in a set of documents and clusters them into coherent themes by analyzing their co-occurrences. MEM has been used in the past in the psychotherapy domain to analyze salient topics in depression forums (Ramirez-Esparza et al., 2008) and also to investigate differences in topics discussed by patients given their therapy outcomes, i.e., therapeutic gain or unsuccessful therapy (Wolf et al., 2010).

Our analyses are conducted on counselor turns only; thus, all the client turns are removed from each session transcript. We use the Meaning Extraction Helper tool (Boyd, 2016) to conduct the text preprocessing tasks, which include tokenizing and lemmatizing the words in each session, as well as removing function words. We keep only words that appear in at least 10% of the transcripts with a minimum frequency of 50. From the resulting list, we remove adjectives, adverbs, and verbs and keep only nouns, as they usually refer to one definite class, thus helping us identify less ambiguous topics.

Using the resulting noun list, with 339 entries, we generate a binary vector for each document, indicating the presence or absence of each noun in the document. We then run a Principal Component Analysis (PCA), followed by varimax rotation, on the document matrix to find clusters of co-occurring nouns. The initial PCA shows that the first three components consist mainly of domain-specific nouns. Notably, this accurately captures the presence of the three main behavior change targets discussed in the dataset, i.e., medication adherence, smoking cessation, and weight management; sample words from each component are shown in Table 1.

Table 1: Three behavior change targets in the dataset.
Behavior target | Sample words
Medication adherence | Adherence, dose, window, target, adherent, maintain, track
Smoking cessation | Cigarette, nicotine, risk, addiction, smoke, withdrawal
Weight management | diet, weight, eat, food, meal, lose, gain, cook, exercise

In order to identify topics potentially related to the counseling skill, we decided to remove the domain words from the analysis, which resulted in 250 nouns. Next, we use the same PCA configuration on the binary document matrix and rerun the experiment, which this time leads to 98 components. Following PCA literature recommendations (Velicer and Fava, 1998), we retain only components with at least three variables with loadings greater than 0.30, which leads to 14 components. We then re-run PCA forcing a 14-component solution; these components explain 35% of the total variance in the original matrix. Finally, we use the method proposed in (Wilson et al., 2016) to measure the degree to which a particular MEM topic (component) is used during high-empathic and low-empathic encounters.
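The sketch below illustrates the core of this MEM-style pipeline, i.e., PCA over a binary document-by-noun matrix followed by a varimax rotation of the loadings; it is our reconstruction under stated assumptions (random toy data, a generic varimax routine), not the authors' pipeline or the Meaning Extraction Helper.

```python
# Illustrative MEM-style topic extraction (our reconstruction, not the
# authors' pipeline): PCA on a binary document-by-noun matrix, followed by
# a varimax rotation of the component loadings.
import numpy as np
from sklearn.decomposition import PCA

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Standard varimax rotation of a (n_terms x n_components) loading matrix."""
    p, k = loadings.shape
    R, d = np.eye(k), 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))))
        R, d_old, d = u @ vt, d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

rng = np.random.default_rng(0)
doc_term = (rng.random((60, 40)) > 0.7).astype(float)  # toy binary doc-by-noun matrix
pca = PCA(n_components=5).fit(doc_term)
rotated = varimax(pca.components_.T)                   # nouns x components
top_nouns_topic1 = np.argsort(-np.abs(rotated[:, 0]))[:6]
print(top_nouns_topic1)                                # indices of top-loading nouns
```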
Table 2 shows the scores assigned to each topic. In this table, scores greater than 1 correspond to topics salient in high empathy encounters, while scores lower than 1 indicate topics salient in low empathy encounters.

Table 2: Topics extracted by the MEM from MI sessions, along with sample nouns and salient topic scores.
Topic | Sample nouns | Score
Concerns | Concern, scare, overwhelm, diagnose | 2.52
Importance | Importance, reason, maintain, sense, increase | 1.41
Inform | Information, schedule, discuss, read | 1.14
Reflections | sound, start, look, mention, past, notice | 1.19
Change | Health, past, experience, decision, realize, difficult, impact | 1.27
Goals | Reach, choose, period, stick, idea, study, record | 1.57
Motivation | Plan, motivate, routine, motivation, group, progress, fun | 1.10
Support | Family, care, worry, job, lifestyle, focus, issue | 0.92
Feelings | Worry, deal, stuck, struggle, leave | 0.91
Guide | Stop, reduce, attempt, spend, replacement | 0.79
Resistance | Trouble, barrier, fear, reach, involve, cover | 0.73
Persuade | Routine, track, strategy, recommend, affect | 0.64
Persuade | Stop, increase, decrease, benefit, consequences | 0.455
Plan | Activity, strategy, barrier, couple | 0.292

From this table, we can derive interesting observations. First, during high-empathic encounters, counselors seem to pay more attention to patient concerns, provide information, use reflective language, and talk about change. Second, during less empathic encounters, counselors seem to persuade and direct more, as well as focus on the client's resistance to change. Interestingly, topics that are identified as dominant in less empathic interactions are also related to MI non-adherent behavior, which means the counselors are not following the MI strategy (Rollnick et al., 2008). Finally, regardless of the empathy shown during the encounter, counselors discuss patients' support systems and feelings at similar rates (values closer to 1), which is expected when following the MI strategy.

7 Prediction of Counselor Empathy

In the previous sections, we provided evidence of important differences in linguistic and verbal behaviors exhibited by counselors and clients during high-empathic and low-empathic MI encounters. In this section, we explore the use of linguistic and acoustic cues to build a computational model that predicts counselor empathy during MI encounters. The feature set consists of the cues identified during our exploratory analyses as potential indicators of counselor empathy, as well as additional text and audio features used during standard NLP and acoustic feature extraction. The text-based features are extracted from the manual transcripts of the sessions, while the audio-based features are extracted from audio segments obtained by force-aligning each session transcript with its corresponding audio. However, as future work, we are considering automating this process by conducting automatic speaker diarization and transcription via automatic speech recognition.

During our experiments, we first explore the predictive power of each cue separately, followed by an integrated model that attempts to combine the linguistic and acoustic cues to improve the prediction of counselor empathy. All the experiments are performed using a Random Forest (Breiman, 2001) classifier. Given the large number of features, we use feature selection based on information gain to identify the best set of features during each experiment; during this process we keep at least 20% of the features in each set. Evaluations are conducted using leave-one-session-out cross-validation. The feature selection algorithm is run on each training fold before the model is trained, and the final model includes the best subset of attributes. As a reference value, we use a majority class baseline, obtained by selecting high empathy as the default class, which corresponds to 64% accuracy.
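The sketch below mirrors this experimental setup, i.e., a Random Forest with per-fold feature selection inside a leave-one-session-out loop; it is our illustration, not the authors' code, the feature matrix is random toy data, and mutual information is used as an information-gain-style criterion.

```python
# Illustrative sketch of the classification setup described above (our
# reconstruction, not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectPercentile, mutual_info_classif
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((30, 50))            # toy session-by-feature matrix
y = rng.integers(0, 2, size=30)     # toy high/low empathy labels

model = make_pipeline(
    SelectPercentile(mutual_info_classif, percentile=20),   # keep ~20% of features
    RandomForestClassifier(n_estimators=100, random_state=0),
)

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):   # leave-one-session-out
    model.fit(X[train_idx], y[train_idx])            # selection re-run on each fold
    correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])
print("accuracy:", correct / len(y))
```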
7.1 Linguistic and Acoustic Features

Engagement: These features represent the participants' engagement during the conversation, as described in Section 4. They are evaluated at 20% increments of the conversation duration and also at conversation (session) level. The features are listed in Table 3.

Table 3: Engagement features extracted at (C) conversation level and (T) 20% increments of the conversation duration, in percentage of turns.
Feature | C | T
Counselor talk time based on syllable counting | ✓ |
Length of conversation setter, length of setter response, ratio between setter and response | ✓ |
Counselor turns, client turns | ✓ | ✓
Average words during client and counselor turns | ✓ | ✓
Ratio of counselor and client words in each turn | ✓ | ✓
Rate of verbal mirroring on each LIWC category (calculated using the LSC metric) | | ✓

Linguistic accommodation: We measure the LSM and LSC metrics as described in Section 5.1 over 74 LIWC categories, measured at 20% increments of the encounter duration.

Nonverbal accommodation: This set includes the counselor-leading and counselor-following synchrony scores, calculated as described in Section 5.2, and evaluated at 20% increments of the encounter duration.

Discourse topics: These features consist of the 14 topics identified in Section 6 as frequently discussed during the MI encounters. The features are obtained by calculating the product of the principal components matrix and the binary document-term matrix.

Raw linguistic features: We extract a large set of linguistic features derived from the session transcript to model the counselor language. We include: unigrams and bigrams (n-grams), represented as a vector containing their frequencies in the session; psycholinguistic-inspired features that capture differences in semantic meaning (lexicons), represented as the total frequency counts of all the words in a lexicon category that are present in the transcript; and syntactic features that encode syntax patterns in the counselor statements (CFG), represented as a vector containing the frequency of lexicalized and unlexicalized production rules from the Context Free Grammar parse trees of each transcript (extracted with the Stanford parser (Klein and Manning, 2003)). The final linguistic feature set consists of 13,648 features.

Raw acoustic features: This feature set includes a large number of speech features extracted with the OpenEar toolkit (Eyben et al., 2009). We use a predefined feature set, EmoLarge, which consists of 6,552 features used for emotion recognition tasks. The features are derived from 25 low-level speech descriptors including intensity, loudness, 12 Mel frequency coefficients, pitch (F0), probability of voicing, F0 envelope, zero-crossing rate, and 8 line spectral frequencies.
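As a small illustration of how the raw linguistic features above can be assembled, the sketch below builds unigram/bigram count vectors and a simple lexicon count per transcript; it is our example with invented transcripts and an invented lexicon, not the feature extraction used in the paper.

```python
# Illustrative sketch of n-gram and lexicon features (our example, not the
# paper's feature extraction; transcripts and lexicon are invented).
from sklearn.feature_extraction.text import CountVectorizer

transcripts = [
    "so tell me more about what worries you about the medication",
    "you should really just stop smoking it is bad for you",
]
concern_lexicon = {"worry", "worries", "concern", "scared"}

vectorizer = CountVectorizer(ngram_range=(1, 2))       # unigrams and bigrams
ngram_matrix = vectorizer.fit_transform(transcripts)   # sessions x n-gram counts

def lexicon_count(text, lexicon):
    """Total frequency of lexicon words in a transcript."""
    return sum(token in lexicon for token in text.split())

lexicon_feature = [lexicon_count(t, concern_lexicon) for t in transcripts]
print(ngram_matrix.shape, lexicon_feature)
```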
7.2 Classification Results

Classification results for each feature set are shown in Table 4. For the linguistic and acoustic modalities, almost all the feature sets provide classification accuracies above the baseline, with good F-scores for both high and low empathy. The only exception is the nonverbal accommodation features, which have an accuracy comparable to the baseline (64.86% vs. 64%). When combining all the feature sets for each modality, we observe performance gains in the range of 10 to 15%, as compared to the models that use one feature set at a time.

Table 4: Overall prediction results and F-scores for high empathy (HE) and low empathy (LE) using linguistic and acoustic feature sets.
Feature set | Acc. | F-score HE | F-score LE
Linguistic:
Engagement | 71.01% | 0.80 | 0.40
Ling Accom | 73.19% | 0.82 | 0.44
Topics | 75.72% | 0.83 | 0.57
N-grams | 78.62% | 0.86 | 0.58
Lexicons | 76.09% | 0.84 | 0.55
CFG | 76.09% | 0.84 | 0.53
All linguistic | 80.07% | 0.87 | 0.62
Acoustic:
Nonverb Accom | 64.86% | 0.79 | 0.00
Raw acoustic | 73.91% | 0.82 | 0.53
All acoustic | 75.72% | 0.83 | 0.56
Linguistic + Acoustic:
Ling+acoustic (early) | 76.81% | 0.84 | 0.56
Ling+acoustic (late) | 79.35% | 0.86 | 0.71

We also conduct multimodal experiments where we combine linguistic and acoustic features using either early fusion, by concatenating all the feature vectors, or late fusion, by aggregating the outputs of each classifier using a rule-based score-level fusion that assigns a weight of 0.8 to the linguistic classifier and 0.2 to the acoustic classifier (weights empirically determined on a development set by evaluating increments of 0.2 for each classifier weight). Overall, the results show performance gains when using late fusion as compared to early fusion. While the late fusion model does not outperform the best linguistic model in terms of accuracy and high empathy F-score, the multimodal late fusion classifier has significantly better F-score performance in the classification of low empathy encounters, thus suggesting potential benefits of fusing acoustic and linguistic cues during the prediction of counselor empathy.

8 Conclusions

In this paper, we presented an extensive analysis of counselor and client behaviors during MI encounters, and found significant differences in the way counselors and clients behave during high and low empathy encounters. We specifically explored the engagement, coordination, and discourse of counselors during MI interventions. Our main findings include:

Engagement: Empathic counselors show more engagement during the conversation by a) showing levels of verbal interaction consistent with their client, and b) reducing their relative talking time with clients.

Coordination: Empathic counselors match the linguistic style of their clients across the session, but maintain control of the conversation by coordinating less at the immediate conversation turn level.

Conversation content: Empathic counselors use reflective language and talk about behavior change, while less empathic counselors persuade more and focus on client resistance toward change.

The results of these analyses were used to build accurate counselor empathy classifiers that rely on linguistic and acoustic cues, with accuracies of up to 80%. In the future, we plan to build upon the acquired knowledge and the developed classifiers to create automatic tools that provide accurate evaluative feedback on counseling practice.

Acknowledgments

We are grateful to Prof.
Berrin Yanikoglu for her very useful input on the machine learning framework. This material is based in part upon work supported by the University of Michigan under the M-Cube program, by the National Science Foundation (grant #1344257), the John Templeton Foundation (grant #48503), and the Michigan Institute for Data Science. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the University of Michigan, the National Science Foundation, the John Templeton Foundation, or the Michigan Institute for Data Science. References Ryan. L. Boyd. 2016. Meh: Meaning extraction helper (version 1.4.14) [software]. Leo Breiman. 2001. Random forests. Machine learning 45(1):5–32. Michael Brookes. 2003. Voicebox: Speech processing toolbox for matlab. Delwyn Catley, Kari J Harris, Kathy Goggin, Kimber Richter, Karen Williams, Christi Patten, Ken Resnicow, Edward Ellerbeck, Andrea Bradley-Ewing, Domonique Malomo, et al. 2012. Motivational interviewing for encouraging quit attempts among unmotivated smokers: study protocol of a randomized, controlled, efficacy trial. BMC public health 12(1):456. Cindy K Chung and James W Pennebaker. 2008. Revealing dimensions of thinking in open-ended self-descriptions: An automated meaning extraction method for natural language. Journal of Research in Personality 42(1):96–132. Thomas A D’Agostino and Carma L Bylund. 2014. Nonverbal accommodation in health care communication. Health communication 29(6):563–573. Cristian Danescu-Niculescu-Mizil, Michael Gamon, and Susan Dumais. 2011. Mark my words!: linguistic style accommodation in social media. In Proceedings of the 20th international conference on World Wide Web. ACM, pages 745–754. Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: Language effects and power differences in social interaction. In Proceedings of WWW. pages 699–708. Florian Eyben, Martin W¨ollmer, and Bj¨orn Schuller. 2009. Openear introducing the munich open-source emotion and affect recognition toolkit. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE, pages 1–6. James Gibson, Nikolaos Malandrakis, Francisco Romero, David C Atkins, and Shrikanth Narayanan. 2015. Predicting therapist empathy in motivational interviews using language features inspired by psycholinguistic norms. In Sixteenth Annual Conference of the International Speech Communication Association. Kathy Goggin, Mary M Gerkovich, Karen B Williams, Julie W Banderas, Delwyn Catley, Jannette BerkleyPatton, Glenn J Wagner, James Stanford, Sally Neville, Vinutha K Kumar, et al. 2013. A randomized controlled trial examining the efficacy of motivational counseling with observed therapy for antiretroviral therapy adherence. AIDS and Behavior 17(6):1992–2001. 1434 Amy L Gonzales, Jeffrey T Hancock, and James W Pennebaker. 2009. Language style matching as a predictor of social dynamics in small groups. Communication Research . Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. Association for Computational Linguistics, pages 423–430. Sarah Peregrine Lord, Elisa Sheng, Zac E Imel, John Baer, and David C Atkins. 2015. More than reflections: Empathy in motivational interviewing includes language style synchrony between therapist and client. Behavior therapy 46(3):296–303. 
Matthew Marge, Satanjeev Banerjee, Alexander Rudnicky, et al. 2010. Using the amazon mechanical turk for transcription of spoken language. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on. IEEE, pages 5270–5273. William R Miller and Stephen Rollnick. 2013. Motivational interviewing: Helping people change, Third edition. The Guilford Press. Theresa B. Manuel Jennifer K. Ernst Denise Moyers. 2014. Motivational Interviewing Treatment Integrity Coding Manual 4.1. Unpublished manual.. Theresa B Moyers and William R Miller. 2013. Is low therapist empathy toxic? Psychology of Addictive Behaviors 27(3):878. Kate G Niederhoffer and James W Pennebaker. 2002. Linguistic style matching in social interaction. Journal of Language and Social Psychology 21(4):337– 360. Ver´onica P´erez-Rosas, Rada Mihalcea, Kenneth Resnicow, Lawrence An, and Satinder Singh. 2016. Building a motivational interviewing dataset. In NAACL Workshop on Clinical Psychology. Nairan Ramirez-Esparza, Cindy K. Chung, Ewa Kacewicz, and James W. Pennebaker. 2008. The psychology of word use in depression forums in english and in spanish: Testing two text analytic approaches. In In Proc. ICWSM 2008. Christina Regenbogen, Daniel A Schneider, Andreas Finkelmeyer, Nils Kohn, Birgit Derntl, Thilo Kellermann, Raquel E Gur, Frank Schneider, and Ute Habel. 2012. The differential contribution of facial expressions, prosody, and speech content to empathy. Cognition & emotion 26(6):995–1014. Catherine M Reich, Jeffrey S Berman, Rick Dale, and Heidi M Levitt. 2014. Vocal synchrony in psychotherapy. Journal of Social and Clinical Psychology 33(5):481. Stephen Rollnick, William R Miller, Christopher C Butler, and Mark S Aloia. 2008. Motivational interviewing in health care: helping patients change behavior. COPD: Journal of Chronic Obstructive Pulmonary Disease 5(3):203–203. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology 29(1):24–54. Wayne F Velicer and Joseph L Fava. 1998. Affects of variable and subject sampling on factor pattern recovery. Psychological methods 3(2):231. Steven R. Wilson, Rada Mihalcea, Ryan L. Boyd, and James W. Pennebaker. 2016. Cultural influences on the measurement of personal values through words, AI Access Foundation, volume SS-16-01 - 07, pages 314–317. Markus Wolf, Cindy K Chung, and Hans Kordy. 2010. Inpatient treatment to online aftercare: e-mailing themes as a function of therapeutic outcomes. Psychotherapy Research 20(1):71–85. Bo Xiao, Daniel Bone, Maarten Van Segbroeck, Zac E Imel, David C Atkins, Panayiotis G Georgiou, and Shrikanth S Narayanan. 2014. Modeling therapist empathy through prosody in drug addiction counseling. In Fifteenth Annual Conference of the International Speech Communication Association. Bo Xiao, Dogan Can, Panayiotis G Georgiou, David Atkins, and Shrikanth S Narayanan. 2012. Analyzing the language of therapist empathy in motivational interview based psychotherapy. In Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), 2012 Asia-Pacific. IEEE, pages 1–4. Bo Xiao, Zac E Imel, Panayiotis G Georgiou, David C Atkins, and Shrikanth S Narayanan. 2015. ” rate my therapist”: Automated detection of empathy in drug and alcohol counseling via speech and language processing. PloS one 10(12):e0143055. 1435
2017
131
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1436–1446, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1132

Leveraging Knowledge Bases in LSTMs for Improving Machine Reading

Bishan Yang, Machine Learning Department, Carnegie Mellon University, [email protected]
Tom Mitchell, Machine Learning Department, Carnegie Mellon University, [email protected]

Abstract

This paper focuses on how to take advantage of external knowledge bases (KBs) to improve recurrent neural networks for machine reading. Traditional methods that exploit knowledge from KBs encode knowledge as discrete indicator features. Not only do these features generalize poorly, but they require task-specific feature engineering to achieve good performance. We propose KBLSTM, a novel neural model that leverages continuous representations of KBs to enhance the learning of recurrent neural networks for machine reading. To effectively integrate background knowledge with information from the currently processed text, our model employs an attention mechanism with a sentinel to adaptively decide whether to attend to background knowledge and which information from KBs is useful. Experimental results show that our model achieves accuracies that surpass the previous state-of-the-art results for both entity extraction and event extraction on the widely used ACE2005 dataset.

1 Introduction

Recurrent neural networks (RNNs), a neural architecture that can operate over text sequentially, have shown great success in addressing a wide range of natural language processing problems, such as parsing (Dyer et al., 2015), named entity recognition (Lample et al., 2016), and semantic role labeling (Zhou and Xu, 2015). These neural networks are typically trained end-to-end, where the input is only text or a sequence of words, and a lot of background knowledge is disregarded.

The importance of background knowledge in natural language understanding has long been recognized (Minsky, 1988; Fillmore, 1976). Earlier NLP systems mostly exploited restricted linguistic knowledge such as manually encoded morphological and syntactic patterns. With the advanced development of knowledge base construction, large amounts of semantic knowledge have become available, ranging from manually annotated semantic networks like WordNet (https://wordnet.princeton.edu) to semi-automatically or automatically constructed knowledge graphs like DBPedia (http://wiki.dbpedia.org/) and NELL (http://rtw.ml.cmu.edu/rtw/kbbrowser/). While traditional approaches have exploited the use of these knowledge bases (KBs) in NLP tasks (Ratinov and Roth, 2009; Rahman and Ng, 2011; Nakashole and Mitchell, 2015), they require a lot of task-specific engineering to achieve good performance.

One way to leverage KBs in recurrent neural networks is by augmenting the dense representations of the networks with symbolic features derived from KBs. This is not ideal, as the symbolic features have poor generalization ability. In addition, they can be highly sparse, e.g., using WordNet synsets can easily produce millions of indicator features, leading to high computational cost. Furthermore, the usefulness of knowledge features varies across contexts, as general KBs involve polysemy, e.g., "Clinton" can refer to a person or a town.
Incorporating KBs irrespective of the textual context could mislead the machine reading process. Can we train a recurrent neural network that learns to adaptively leverage knowledge from KBs to improve machine reading?

In this paper, we propose KBLSTM, an extension to bidirectional Long Short-Term Memory neural networks (BiLSTMs) (Hochreiter and Schmidhuber, 1997; Graves et al., 2005) that is capable of leveraging symbolic knowledge from KBs as it processes each word in the text. At each time step, the model retrieves KB concepts that are potentially related to the current word. Then, an attention mechanism is employed to dynamically model their semantic relevance to the reading context. Furthermore, we introduce a sentinel component in BiLSTMs that allows flexibility in deciding whether to attend to background knowledge or not. This is crucial because in some cases the text context should override the context-independent background knowledge available in general KBs.

In this work, we leverage two general, readily available knowledge bases: WordNet (WordNet, 2010) and NELL (Mitchell et al., 2015). WordNet is a manually created lexical database that organizes a large number of English words into sets of synonyms (i.e., synsets) and records conceptual relations (e.g., hypernym, part of) among them. NELL is an automatically constructed web-based knowledge base that stores beliefs about entities. It is organized based on an ontology of hundreds of semantic categories (e.g., person, fruit, sport) and relations (e.g., personPlaysInstrument). We learn distributed representations (i.e., embeddings) of WordNet and NELL concepts using knowledge graph embedding methods. We then integrate these learned embeddings with the state vectors of the BiLSTM network to enable knowledge-aware predictions.

We evaluate the proposed model on two core information extraction tasks: entity extraction and event extraction. For entity extraction, the model needs to recognize all mentions of entities such as persons, organizations, locations, and other things in text. For event extraction, the model is required to identify event mentions or event triggers that express certain types of events, e.g., elections, attacks, and travels. (An event also consists of participants whose types depend on the event triggers; in this work, we focus on predicting event triggers and leave the prediction of event participants for future work.) Both tasks are challenging and often require the combination of background knowledge and the text context for accurate prediction. For example, in the sentence "Maigret left viewers in tears.", knowing that "Maigret" can refer to a TV show can greatly help disambiguate its meaning. However, knowledge bases may hurt performance if used blindly. For example, in the sentence "Santiago is charged with murder.", methods that rely heavily on KBs are likely to interpret "Santiago" as a location due to the popular use of Santiago as a city. Similarly for events, the same word can trigger different types of events; for example, "release" can be used to describe different events ranging from book publishing to parole. It is important for machine learning models to learn to decide which knowledge from KBs is relevant given the context.

Extensive experiments demonstrate that our KBLSTM models effectively leverage background knowledge from KBs in training BiLSTM networks for machine reading.
They achieve significant improvements on both entity and event extraction compared to traditional feature-based methods and LSTM networks that disregard knowledge in KBs, resulting in new state-of-the-art results for entity extraction and event extraction on the widely used ACE2005 dataset.

2 Related Work

Essential to RNNs' success on natural language processing is the use of Long Short-Term Memory neural networks (Hochreiter and Schmidhuber, 1997) (LSTMs) or the Gated Recurrent Unit (Cho et al., 2014) (GRU), as they are able to handle long-term dependencies by adaptively memorizing values for either long or short durations. Their bidirectional variants, BiLSTM (Graves et al., 2005) and BiGRU, further allow the incorporation of both past and future information. This ability has been shown to be generally helpful in various NLP tasks such as named entity recognition (Huang et al., 2015; Chiu and Nichols, 2016; Ma and Hovy, 2016), semantic role labeling (Zhou and Xu, 2015), and reading comprehension (Hermann et al., 2015; Chen et al., 2016). In this work, we also employ the BiLSTM architecture.

In parallel to the development for text processing, neural networks have been successfully used to learn distributed representations of structured knowledge from large KBs (Bordes et al., 2011, 2013; Socher et al., 2013; Yang et al., 2015; Guu et al., 2015). Embedding the symbolic representations into continuous space not only makes KBs easier to use in statistical learning approaches, but also offers strong generalization ability. Many attempts have been made at connecting distributed representations of KBs with text in the context of knowledge base completion (Lao et al., 2011; Gardner et al., 2014; Toutanova et al., 2015), relation extraction (Weston et al., 2013; Chang et al., 2014; Riedel et al., 2013), and question answering (Miller et al., 2016). However, these approaches model text using shallow representations such as subject/relation/object triples or bags of words. More recently, Ahn et al. (2016) proposed a neural knowledge language model that leverages knowledge bases in RNN language models, which allows for better representations of words for language modeling. Unlike their work, we leverage knowledge bases in LSTMs and apply them to information extraction.

The architecture of our KBLSTM model draws on the development of attention mechanisms that are widely employed in tasks such as machine translation (Bahdanau et al., 2015) and image captioning (Xu et al., 2015). Attention allows neural networks to dynamically attend to salient features of the input. With a similar motivation, we employ attention in KBLSTMs to allow for dynamic attention to the relevant knowledge given the text context. Our model is also closely related to a recent model of caption generation introduced by Lu et al. (2017), where a visual sentinel is introduced to allow the decoder to decide whether to attend to image information when generating the next word. In our model, we introduce a sentinel to control the tradeoff between background knowledge and information from the text.

3 Method

In this section, we present our KBLSTM model. We first describe several basic recurrent neural network frameworks for machine reading, including basic RNNs, LSTMs, and bidirectional LSTMs (Sec. 3.1). We then introduce our extension to bidirectional LSTMs that allows for the incorporation of KB information at each time step of reading (Sec. 3.2).
The KB information is encoded using continuous representations (i.e., embeddings), which are learned using knowledge embedding methods (Sec. 3.3).

3.1 RNNs, LSTMs, and Bidirectional LSTMs

RNNs are a class of neural networks that take a sequence of inputs and compute a hidden state vector at each time step based on the current input and the entire history of inputs. The hidden state vector can be computed recursively using the following equation (Elman, 1990):

h_t = F(W h_{t-1} + U x_t)

where x_t is the input at time step t, h_t is the hidden state at time step t, U and W are weight matrices, and F is a nonlinear function such as tanh or ReLU. Depending on the application, RNNs can produce outputs based on the hidden state of each time step or just the last time step.

A Long Short-Term Memory network (Hochreiter and Schmidhuber, 1997) (LSTM) is a variant of RNNs which was designed to better handle cases where the output at time t depends on much earlier inputs. It has a memory cell and three gating units: an input gate that controls what information to add to the current memory, a forget gate which controls what information to remove from the previous memory, and an output gate which controls what information to output from the current memory. Each gate is implemented as a logistic function σ that takes as input the previous hidden state and the current input, and outputs values between 0 and 1. Multiplication with these values controls the flow of information into or out of the memory. In equations, the updates at each time step t are:

i_t = σ(W_i h_{t-1} + U_i x_t)
f_t = σ(W_f h_{t-1} + U_f x_t)
o_t = σ(W_o h_{t-1} + U_o x_t)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_c h_{t-1} + U_c x_t)
h_t = o_t ⊙ tanh(c_t)

where i_t is the input gate, f_t is the forget gate, o_t is the output gate, c_t is the memory cell, and h_t is the hidden state. ⊙ denotes element-wise multiplication. W_i, U_i, W_f, U_f, W_o, U_o, W_c, U_c are weight matrices to be learned.

Bidirectional LSTMs (Graves et al., 2005) (BiLSTMs) are essentially a combination of two LSTMs operating in two directions: one operates in the forward direction and the other operates in the backward direction. This leads to two hidden states at time step t, a forward state →h_t and a backward state ←h_t, which can be viewed as summaries of the past and the future, respectively. Their concatenation h_t = [→h_t; ←h_t] provides a whole summary of the information about the input around time step t. This property is attractive in NLP tasks, since information about both previous words and future words can be helpful for interpreting the meaning of the current word.
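A minimal NumPy sketch of the LSTM update above is given below; it is our illustration only (random weights, no bias terms, toy dimensions), not the Theano implementation used in the paper.

```python
# Minimal sketch of the LSTM update in Section 3.1 (our illustration; weights
# are random stand-ins and biases are omitted, as in the equations above).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM time step following the equations above."""
    W_i, U_i, W_f, U_f, W_o, U_o, W_c, U_c = params
    i_t = sigmoid(W_i @ h_prev + U_i @ x_t)                       # input gate
    f_t = sigmoid(W_f @ h_prev + U_f @ x_t)                       # forget gate
    o_t = sigmoid(W_o @ h_prev + U_o @ x_t)                       # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W_c @ h_prev + U_c @ x_t)  # memory cell
    h_t = o_t * np.tanh(c_t)                                      # hidden state
    return h_t, c_t

d_in, d_h = 4, 3
rng = np.random.default_rng(0)
params = tuple(rng.standard_normal((d_h, d_h if i % 2 == 0 else d_in)) * 0.1
               for i in range(8))              # W_i, U_i, W_f, U_f, W_o, U_o, W_c, U_c
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):       # run over a toy 5-step sequence
    h, c = lstm_step(x, h, c, params)
print(h)
```

A bidirectional version would simply run a second set of parameters over the reversed sequence and concatenate the two hidden states at each position.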
[Figure 1: Architecture of the KBLSTM model. At each time step t, the knowledge module retrieves a set of candidate KB concepts V(x_t) that are related to the current input x_t, and then computes a knowledge state vector m_t that integrates the embeddings of the candidate KB concepts v_1, v_2, ..., v_L and the current context vector s_t. See Section 3.2 for details.]

3.2 Knowledge-aware Bidirectional LSTMs

Our model (referred to as KBLSTM) extends BiLSTMs to allow flexibility in incorporating symbolic knowledge from KBs. Instead of encoding knowledge as discrete features, we encode it using continuous representations. Concretely, we learn embeddings of concepts in KBs using a knowledge graph embedding method (we will describe the details in Section 3.3). The KBLSTM model then retrieves the embeddings of candidate concepts that are related to the current word being processed and integrates them into its state vector to make knowledge-aware predictions. Figure 1 depicts the architecture of our model.

The core of our model is the knowledge module, which is responsible for transferring background knowledge into the BiLSTMs. The knowledge at time step t consists of candidate KB concepts V(x_t) for input x_t (we will describe how to obtain the candidate KB concepts from NELL and WordNet in Section 3.3). Each candidate KB concept i ∈ V(x_t) is associated with a vector embedding v_i. We compute an attention weight α_{ti} for concept vector v_i via a bilinear operator, which reflects how relevant or important concept i is to the current reading context h_t:

α_{ti} ∝ exp(v_i^T W_v h_t)   (1)

where W_v is a parameter matrix to be learned.

Note that the candidate concepts are in some cases misleading. For example, a KB may store the fact that "Santiago" is a city but miss the fact that it can also refer to a person. Incorporating such knowledge in the sentence "Santiago is charged with murder." could be misleading. To address this issue, we introduce a knowledge sentinel that records the information of the current context, and use a mixture model to allow for a better tradeoff between the impact of background knowledge and information from the context. Specifically, we compute a sentinel vector s_t as:

b_t = σ(W_b h_{t-1} + U_b x_t)   (2)
s_t = b_t ⊙ tanh(c_t)   (3)

where W_b and U_b are weight parameters to be learned. The weight on the local context is computed as:

β_t ∝ exp(s_t^T W_s h_t)   (4)

where W_s is a parameter matrix to be learned. The mixture model is defined as:

m_t = Σ_{i ∈ V(x_t)} α_{ti} v_i + β_t s_t   (5)

where Σ_{i ∈ V(x_t)} α_{ti} + β_t = 1. m_t can be viewed as a knowledge state vector that encodes external KB information with respect to the input at time t. We combine it with the state vector h_t of the BiLSTM to obtain a knowledge-aware state vector ĥ_t:

ĥ_t = h_t + m_t   (6)

If V(x_t) = ∅, we set m_t = 0. ĥ_t can be used for predictions in the same way as the original state vector h_t (see Section 4 for details).
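The sketch below is our NumPy illustration of the knowledge-attention mixture in Equations (1)-(6); the weights, embeddings, and dimensions are random stand-ins, and for simplicity the concept embeddings share the dimensionality of the BiLSTM state so that Eq. (6) applies directly.

```python
# Illustrative NumPy sketch of the knowledge attention with sentinel,
# Eq. (1)-(6) (our reconstruction; all tensors are random stand-ins).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def knowledge_state(h_t, s_t, concept_vecs, W_v, W_s):
    """Mixture over candidate KB concepts and the knowledge sentinel."""
    logits = [v @ W_v @ h_t for v in concept_vecs]   # Eq. (1): concept relevance
    logits.append(s_t @ W_s @ h_t)                   # Eq. (4): sentinel score
    weights = softmax(np.array(logits))              # alphas and beta sum to 1
    alphas, beta = weights[:-1], weights[-1]
    return sum(a * v for a, v in zip(alphas, concept_vecs)) + beta * s_t  # Eq. (5)

rng = np.random.default_rng(0)
d = 6                                                  # shared state/concept dimension
h_t = rng.standard_normal(d)                           # BiLSTM state at time t
s_t = rng.standard_normal(d)                           # sentinel vector (Eq. 2-3)
concepts = [rng.standard_normal(d) for _ in range(3)]  # candidate concept embeddings
W_v, W_s = rng.standard_normal((d, d)), rng.standard_normal((d, d))

m_t = knowledge_state(h_t, s_t, concepts, W_v, W_s)
h_hat = h_t + m_t                                      # Eq. (6): knowledge-aware state
print(h_hat)
```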
3.3 Embedding Knowledge Base Concepts

Now we describe how to learn embeddings of concepts in KBs. We consider two KBs: WordNet and NELL, which are both knowledge graphs that can be stored in the form of RDF triples (https://www.w3.org/TR/rdf11-concepts/). Each triple consists of a subject entity, a relation, and an object entity. Examples of triples in WordNet are (location, hypernym of, city) and (door, has part, lock), where both the subject and object entities are synsets in WordNet. Examples of triples in NELL are (New York, located in, United States) and (New York, is a, city), where the subject entity is a noun phrase that can refer to a real-world entity and the object entity can be either a noun phrase entity or a concept category. In this work, we refer to the synsets in WordNet and the concept categories in NELL as KB concepts. They are the key components of the ontologies and provide generally useful information for language understanding.

As our KBLSTM model reads through each word in a sentence, it retrieves knowledge from NELL by searching for entities with the current word and collecting the related concept categories as candidate concepts, and it retrieves knowledge from WordNet by treating the synsets of the current word as candidate concepts.

We employ a knowledge graph embedding approach to learn the representations of the candidate concepts. Denoting a KB triple as $(e_1, r, e_2)$, we want to learn embeddings of the subject entity $e_1$, the object entity $e_2$, and the relation $r$, so that the relevance of the triple can be measured by a scoring function based on the embeddings. We employ the BILINEAR model described in (Yang et al., 2015), which computes the score of a triple $(e_1, r, e_2)$ via a bilinear function:

$$S_{(e_1, r, e_2)} = v_{e_1}^T M_r v_{e_2}$$

where $v_{e_1}$ and $v_{e_2}$ are vector embeddings for $e_1$ and $e_2$ respectively, and $M_r$ is a relation-specific embedding matrix. (We also experimented with TransE (Bordes et al., 2013) and NTN (Socher et al., 2013), and found that they perform significantly worse than the BILINEAR method, especially on predicting the "is a" facts in NELL.) We train the embeddings using the max-margin ranking objective:

$$\sum_{q=(e_1, r, e_2) \in T} \; \sum_{q'=(e_1, r, e_2') \in T'} \max\{0,\, 1 - S_q + S_{q'}\} \quad (7)$$

where $T$ denotes the set of triples in the KB and $T'$ denotes the "negative" triples that are not observed in the KB.

For WordNet, we train the concept embeddings using the preprocessed data provided by (Bordes et al., 2013), which contains 151,442 triples with 40,943 synsets and 18 relations. We use the same data splits for training, development, and testing. During training, we use AdaGrad (Duchi et al., 2011) to optimize objective (7) with an initial learning rate of 0.05 and a mini-batch size of 100. At each gradient step, we sample 10 negative object entities with respect to the positive triple. Our implementation of the BILINEAR model achieves a top-10 accuracy of 91% for predicting missing object entities on the WordNet test set, which is comparable with previous work (Yang et al., 2015).

For NELL, we train the concept embeddings using a subset of the NELL data (http://rtw.ml.cmu.edu/rtw/resources). We filter out noun phrases with annotation confidence less than 0.9 in order to remove erroneous labels introduced during the automatic construction of NELL (Wijaya, 2016). This results in 180,107 noun phrases and 258 concept categories in total. We randomly split 80% of the data for training, 10% for development, and 10% for testing. For each training example, we enumerate all the unobserved concept categories as negative labels. Instead of treating each entity as a unit, we represent it as the average of its constituent word vectors concatenated with its head word vector. The word vectors are initialized with pre-trained paraphrastic embeddings (Wieting et al., 2015) and fine-tuned during training. Using the same optimization parameters as before, the BILINEAR model achieves 88% top-1 accuracy for predicting concept categories of given noun phrases on the test set.
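As an illustration of the BILINEAR scoring function and the max-margin ranking objective of Eq. (7), here is a small NumPy sketch. The toy entities, the relation, and the single-triple loss function are hypothetical stand-ins for the actual WordNet and NELL training data.

```python
import numpy as np

def bilinear_score(v_e1, M_r, v_e2):
    """Score of a triple (e1, r, e2): v_e1^T M_r v_e2."""
    return v_e1 @ M_r @ v_e2

def margin_loss(pos_triple, neg_objects, ent_emb, rel_mat):
    """Max-margin ranking loss of Eq. (7) for one observed triple against
    sampled negative triples obtained by corrupting the object entity."""
    e1, r, e2 = pos_triple
    s_pos = bilinear_score(ent_emb[e1], rel_mat[r], ent_emb[e2])
    loss = 0.0
    for e2_neg in neg_objects:
        s_neg = bilinear_score(ent_emb[e1], rel_mat[r], ent_emb[e2_neg])
        loss += max(0.0, 1.0 - s_pos + s_neg)        # hinge on the score margin
    return loss

# Toy usage: random embeddings for a few hypothetical NELL-style entries.
rng = np.random.default_rng(0)
dim = 50
ent_emb = {e: rng.normal(scale=0.1, size=dim)
           for e in ["new_york", "united_states", "city", "person"]}
rel_mat = {"is_a": rng.normal(scale=0.1, size=(dim, dim))}
print(margin_loss(("new_york", "is_a", "city"), ["person"], ent_emb, rel_mat))
```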
4 Experiments

4.1 Entity Extraction

We first apply our model to entity extraction, a task that is typically addressed by assigning each word/token BIO labels (Begin, Inside, and Outside) (Ratinov and Roth, 2009) indicating the token's position within an entity mention as well as its entity type. To allow tagging over phrases instead of words, we address entity extraction in two steps. The first step detects mention chunks, and the second step assigns entity type labels to mention chunks (including an O type indicating other types). In the first stage, we train a BiLSTM network with a conditional random field objective (Huang et al., 2015) using gold-standard BIO labels regardless of entity types, and only predict each token's position within an entity mention. This produces a sequence of chunks for each sentence. In the second stage, we train another supervised BiLSTM model to predict type labels for the previously extracted chunks. Each chunk is treated as a unit input to the BiLSTMs, and the input vector is computed by averaging the input vectors of the individual words within the chunk, concatenated with its head word vector.

The BiLSTMs can be trained with a softmax objective that minimizes the cross-entropy loss for each individual chunk. It computes the probability of the correct type for each chunk:

$$p_{y_t} = \frac{\exp(w_{y_t}^T h_t)}{\sum_{y_t'} \exp(w_{y_t'}^T h_t)} \quad (8)$$

The BiLSTMs can also be trained with a CRF objective (referred to as BiLSTM-CRF) that minimizes the negative log-likelihood of the entire sequence. It computes the probability of the correct types for a sequence of chunks:

$$p_y = \frac{\exp(g(x, y))}{\sum_{y'} \exp(g(x, y'))} \quad (9)$$

where $g(x, y) = \sum_{t=1}^{l} P_{t, y_t} + \sum_{t=0}^{l} A_{y_t, y_{t+1}}$, $P_{t, y_t} = w_{y_t}^T h_t$ is the score of assigning the $t$-th chunk the tag $y_t$, and $A_{y_t, y_{t+1}}$ is the score of transitioning from tag $y_t$ to $y_{t+1}$. By replacing $h_t$ in Eq. (8) and Eq. (9) with the knowledge-aware state vector $\hat{h}_t$ (Eq. (6)), we can compute the objectives for KBLSTM and KBLSTM-CRF, respectively.

4.1.1 Implementation Details

We evaluate our models on the ACE2005 corpus (LDC, 2005) and the OntoNotes 5.0 corpus (Hovy et al., 2006) for entity extraction. Both datasets consist of text from a variety of sources such as newswire, broadcast conversations, and web text. We use the same data splits and task settings for ACE2005 as in Li et al. (2014) and for OntoNotes 5.0 as in Durrett and Klein (2014). At each time step, our models take as input a word vector and a capitalization feature (Chiu and Nichols, 2016). We initialize the word vectors using pre-trained paraphrastic embeddings (Wieting et al., 2015), as we find that they significantly outperform randomly initialized embeddings. The word embeddings are fine-tuned during training. For the KBLSTM models, we obtain the embeddings of KB concepts from NELL and WordNet as described in Section 3.3. These embeddings are kept fixed during training.

We implement all the models using Theano on a single GPU. We update the model parameters on every training example using Adam with default settings (Kingma and Ba, 2014) and apply dropout to the input layer of the BiLSTM with a rate of 0.5. The word embedding dimension is set to 300 and the hidden vector dimension is set to 100. We train models on ACE2005 for about 5 epochs and on OntoNotes 5.0 for about 10 epochs, with early stopping based on development results. For each experiment, we report the average results over 10 random runs. We also apply the Wilcoxon rank sum test to compare our models with the baseline models.

4.1.2 Results

We compare our KBLSTM and KBLSTM-CRF models with the following baselines: BiLSTM, a vanilla BiLSTM network trained using the same input, and BiLSTM-Fea, a BiLSTM network that combines its hidden state vector with discrete KB features (i.e., indicators of candidate KB concepts) to produce the final state vector. We also include their variants BiLSTM-CRF and BiLSTM-Fea-CRF, which are trained using the CRF objective instead of the standard softmax objective.

We first report results on entity extraction with gold-standard boundaries for multi-word mentions. This allows us to focus on the performance of entity type prediction without considering mention boundary errors and the noise they introduce in retrieving candidate KB concepts. Table 1 shows the results.

Model            P     R     F1
BiLSTM           83.5  86.4  84.9
BiLSTM-CRF       87.3  84.7  86.0
BiLSTM-Fea       86.1  84.7  85.4
BiLSTM-Fea-CRF   87.7  86.1  86.9
KBLSTM           87.8  86.6  87.2
KBLSTM-CRF       88.1  87.8  88.0∗

Table 1: Entity extraction results on the ACE2005 test set with gold-standard mention boundaries.

We find that the CRF objective generally outperforms the softmax objective.
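As a brief aside before continuing with the results, the following sketch makes the two chunk-typing objectives of Eqs. (8) and (9) concrete. It is a toy illustration only: the CRF partition function is computed by brute-force enumeration rather than the forward algorithm, start/stop transitions are omitted, and none of this corresponds to the Theano implementation used in the experiments.

```python
import numpy as np
from itertools import product

def chunk_type_logprob(h_hat, y, W):
    """Eq. (8): log-probability of type y for one chunk representation h_hat.
    W has one row of weights per entity type."""
    logits = W @ h_hat
    logits = logits - logits.max()                  # numerical stability
    return logits[y] - np.log(np.exp(logits).sum())

def sequence_score(h_hats, ys, W, A):
    """g(x, y) of Eq. (9): emission scores plus tag-transition scores.
    For brevity this sketch omits the start/stop transitions of the full model."""
    score = 0.0
    for t, y in enumerate(ys):
        score += W[y] @ h_hats[t]                   # P_{t, y_t}
        if t + 1 < len(ys):
            score += A[y, ys[t + 1]]                # A_{y_t, y_{t+1}}
    return score

def crf_loglikelihood(h_hats, ys, W, A, n_tags):
    """Eq. (9) with the partition function computed by brute-force enumeration
    (fine for an illustration; real implementations use the forward algorithm)."""
    gold = sequence_score(h_hats, ys, W, A)
    all_scores = np.array([sequence_score(h_hats, list(seq), W, A)
                           for seq in product(range(n_tags), repeat=len(h_hats))])
    m = all_scores.max()
    log_Z = m + np.log(np.exp(all_scores - m).sum())
    return gold - log_Z

# Toy usage: 3 chunks, 4 entity types, 5-dimensional knowledge-aware states.
rng = np.random.default_rng(2)
h_hats = [rng.normal(size=5) for _ in range(3)]
W, A = rng.normal(size=(4, 5)), rng.normal(size=(4, 4))
print(chunk_type_logprob(h_hats[0], 2, W), crf_loglikelihood(h_hats, [1, 2, 0], W, A, 4))
```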
Our KBLSTM-CRF model significantly improves over its counterpart BiLSTM-Fea-CRF. This demonstrates the effectiveness of KBLSTM architecture in leveraging KB information. Table 2 breaks down the results of the KBLSTM-CRF and the BiLSTM-Fea-CRF using different KB settings. We find that the KBLSTMCRF outperforms the BiLSTM-Fea-CRF in all the settings and that incorporating both KBs leads to the best performance. Next, we evaluate our models on entity extraction with predicted mention boundaries. We first train a BiLSTM-CRF to perform mention 8∗indicates p < 0.05 when comparing to the BiLSTMbased models. 1441 Model KB P R F1 BiLSTM-Fea-CRF NELL 87.2 86.1 86.6 WordNet 86.4 86.0 86.2 Both 87.7 86.1 86.9 KBLSTM-CRF NELL 87.4 87.6 87.5 WordNet 87.1 87.4 87.3 Both 88.1 87.8 88.0 Table 2: Ablation results with different KBs. chunking using the same setting as described in Section 4.1.1. We then treat the predicted chunks as units for entity type labeling. Table 3 reports the full entity extraction results on the ACE2005 test set. We compare our models with the state-of-the-art feature-based linear models Li et al. (2014), Yang and Mitchell (2016), and the recently proposed sequence- and tree-structured LSTMs (Miwa and Bansal, 2016). Interestingly, we find that using BiLSTM-CRF without any KB information already gives strong performance compared to previous work. The KBLSTM-CRF model demonstrates the best performance among all the models and achieves the new state-of-theart performance on the ACE2005 dataset. We also report the entity extraction results on the OntoNotes 5.0 test set in Table 4. We compare our models with the existing feature-based models Ratinov and Roth (2009) and Durrett and Klein (2014), which both employ heavy feature engineering to bring in external knowledge. BiLSTMCNN (Chiu and Nichols, 2016) employs a hybrid BiLSTM and Convolutional neural network (CNN) architecture and incorporates rich lexicon features derived from SENNA and DBPedia. Our KBLSTM-CRF model shows competitive results compared to their results. It also presents significant improvements compared to the BiLSTM and BiLSTM-Fea models. Note that the benefit of adding KB information is smaller on OntoNotes compared to ACE2005. We believe that there are two main reasons. One is that NELL has a lower coverage of entity mentions in OntoNotes than in ACE2005 (57% vs. 65%). Second, OntoNotes has a significantly larger amount of training data, which allows for more accurate models without much help from external resources. 4.2 Event Extraction We now apply our model to the task of event extraction. Event extraction is concerned with deModel P R F1 Li and Ji (2014) 85.2 76.9 80.8 Yang and Mitchell (2016) 83.5 80.2 81.8 Miwa and Bansal (2016) 82.9 83.9 83.4 BiLSTM 82.5 83.1 82.8 BiLSTM-CRF 84.6 82.5 83.6 BiLSTM-Fea 84.3 83.2 83.7 BiLSTM-Fea-CRF 84.7 83.5 84.1 KBLSTM 85.5 85.2 85.3 KBLSTM-CRF 85.4 86.0 85.7∗ Table 3: Entity extraction results on the ACE2005 test set. Model P R F1 Ratinov and Roth (2009) 82.0 84.9 83.4 Durrett and Klein (2014) 85.2 82.8 84.0 BiLSTM-CNN 82.5 82.4 82.5 BiLSTM-CNN+emb 85.9 86.3 86.1 BiLSTM-CNN+emb+lexicon 86.0 86.5 86.2 BiLSTM 84.9 86.3 85.6 BiLSTM-CRF 85.3 86.6 85.9 BiLSTM-Fea 85.2 86.4 85.8 BiLSTM-Fea-CRF 85.2 86.8 86.0 KBLSTM 86.3 86.2 86.2 KBLSTM-CRF 86.1 86.8 86.4∗ Table 4: Entity extraction results on the OntoNotes 5.0 test set. tecting event triggers, i.e., a word that expresses the occurrence of a pre-defined type of event. 
Event triggers are mostly verbs and eventive nouns but can occasionally be adjectives and other content words. Therefore, the task is typically addressed as a classification problem where the goal is to label each word in a sentence with an event type or an O type if it does not express any of the defined events. It is straightforward to apply the BiLSTM architecture to event extraction. Similarly to the models for entity extraction, we can train the BiLSTM network with both the softmax objective and the CRF objective. We evaluate our models on the portion ACE2005 corpus that has event annotations. We use the same data split and experimental setting as in Li et al. (2013). The training procedure is the same as in Section 4.1.1, and we train all the models for about 5 epochs. For the KBLSTM models, we integrate the learned embeddings of WordNet synsets during training. 1442 (a) The X-axis represents relevant NELL concepts for the entity mention clinton. The Y-axis represents the concept weights and the knowledge sentinel weight. (b) The X-axis represents relevant WordNet concepts for the event trigger head. The Y-axis represents the concept weights and the knowledge sentinel weight. Figure 2: Visualization of the attention weights for KB features learned by KBLSTM-CRF. Higher weights imply higher importance. Model P R F1 JOINTBEAM 74.0 56.7 64.2 JOINTEVENTENTITY 75.1 63.3 68.7 DMCNN 71.8 63.8 69.0 JRNN 66.0 73.0 69.3 BiLSTM 71.3 59.3 64.7 BiLSTM-CRF 64.2 66.6 65.4 BiLSTM-Fea 68.4 62.7 65.5 BiLSTM-Fea-CRF 65.5 66.7 66.1 KBLSTM 70.1 67.3 68.7 KBLSTM-CRF 71.6 67.8 69.7∗ Table 5: event extraction results on the ACE2005 test set. 4.2.1 Results We compare our models with the prior state-ofthe-art approaches for event extraction, including neural and non-neural ones: JOINTBEAM refers to the joint beam search approach with local and global features (Li et al., 2013); JOINTENTITYEVENT refers to the graphical model for joint entity and event extraction (Yang and Mitchell, 2016); DMCNN is the dynamic multi-pooling CNNs in Chen et al. (2015); and JRNN is an RNN model with memory introduced by Nguyen et al. (2016). The first block in Table 5 shows the results of the feature-based linear models (taken from Yang and Mitchell (2016)). The second block shows the previously reported results for the neural models. Note that they both make use of gold-standard entity annotations. The third block shows the results of our models. We can see that our KBLSTM models significantly outperform the BiLSTM and BiLSTM-Fea models, which again confirms their effectiveness in leveraging KB information. The KBLSTM-CRF model achieves the best performance and outperforms the previous state-of-the-art methods without having access to any gold-standard entities. 4.3 Model Analysis In order to better understand our model, we visualize the learned attention weights α for KB concepts and the sentinel weight β that measures the tradeoff between knowledge and context. Figure 2a visualizes these weights for the entity mention “clinton”. In the first sentence, “clinton” refers to a LOCATION while in the second sentence, “clinton” refers to a PERSON. Our model learns to attend to different word senses for ’clinton’ (KB concepts associated with ’clinton’) in different sentences. Note that the weight on the knowledge sentinel is higher in the first sentence. This is because the local text alone is indicative of the entity type for “clinton”: the word “in” is more likely to be followed by a location than a person. 
We find that BiLSTM-Fea-CRF models often make wrong predictions on examples like this due to its inflexibility in modeling knowledge relevance with respect to context. Figure 2b shows the learned weights for the event trigger word “head” in two sentences, one expresses a TRAVEL event while the other expresses a STARTPOSITION event. Again, we find that our model is capable of attending to relevant WordNet synsets and more accurately disambiguate event types. 1443 5 Conclusion In this paper, we introduce the KBLSTM network architecture as one approach to incorporating background KBs for improving recurrent neural networks for machine reading. This architecture employs an adaptive attention mechanism with a sentinel that allows for learning an appropriate tradeoff for blending knowledge from the KBs with information from the currently processed text, as well as selecting among relevant KB concepts for each word (e.g., to select the correct semantic categories for “clinton” as a town or person in figure 2a). Experimental results show that our model achieves state-of-the-art performance on standard benchmarks for both entity extraction and event extraction. We see many additional opportunities to integrate background knowledge with training of neural network models for language processing. Though our model is evaluated on entity extraction and event extraction, it can be useful for other machine reading tasks. Our model can also be extended to integrate knowledge from a richer set of KBs in order to capture the diverse variety and depth of background knowledge required for accurate and deep language understanding. Acknowledgments This research was supported in part by DARPA under contract number FA8750-13-2-0005, and by NSF grants IIS-1065251 and IIS-1247489. We also gratefully acknowledge the support of the Microsoft Azure for Research program and the AWS Cloud Credits for Research program. In addition, we would like to thank anonymous reviewers for their helpful comments. References Sungjin Ahn, Heeyoul Choi, Tanel P¨arnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. arXiv preprint arXiv:1608.00318 . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems (NIPS). pages 2787–2795. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence. Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek. 2014. Typed tensor decomposition of knowledge bases for relation extraction. In Empirical Methods in Natural Language Processing (EMNLP). Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). pages 167–176. Jason PC Chiu and Eric Nichols. 2016. 
Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics 4:357–370. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121–2159. Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics 2:477–490. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Jeffrey L Elman. 1990. Finding structure in time. Cognitive science 14(2):179–211. Charles J Fillmore. 1976. Frame semantics and the nature of language. Annals of the New York Academy of Sciences 280(1):20–32. Matt Gardner, Partha Pratim Talukdar, Jayant Krishnamurthy, and Tom Mitchell. 2014. Incorporating vector space similarity in random walk inference over knowledge bases. In Empirical Methods in Natural Language Processing (EMNLP). 1444 Alex Graves, Santiago Fern´andez, and J¨urgen Schmidhuber. 2005. Bidirectional lstm networks for improved phoneme classification and recognition. In International Conference on Artificial Neural Networks. Springer, pages 799–804. Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS). pages 1693–1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers. pages 57–60. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991 . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Proceedings of the International Conference on Learning Representations (ICLR) . Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Ni Lao, Tom Mitchell, and William W Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 529–539. LDC. 2005. The ace 2005 evaluation plan. In NIST. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). pages 402–412. 
Qi Li, Heng Ji, Yu Hong, and Sujian Li. 2014. Constructing information networks using one single model. In Empirical Methods in Natural Language Processing (EMNLP). pages 1846–1851. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). pages 73–82. Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. 2017. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Empirical Methods in Natural Language Processing (EMNLP). Marvin Minsky. 1988. Society of mind. Simon and Schuster. T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI). Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL) . Ndapandula Nakashole and Tom M Mitchell. 2015. A knowledge-intensive model for prepositional phrase attachment. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). pages 365–375. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In North American Chapter of the Association for Computational Linguistics (NAACLHLT). pages 300–309. Altaf Rahman and Vincent Ng. 2011. Coreference resolution with world knowledge. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL). pages 814–824. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL). pages 147–155. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of HLT-NAACL. 1445 Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems (NIPS). pages 926–934. Kristina Toutanova, Danqi Chen, Patrick Pantel, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Association for Computational Linguistics (ACL). Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP). John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. 
Towards universal paraphrastic sentence embeddings. Proceedings of the International Conference on Learning Representations (ICLR) . Derry Tanti Wijaya. 2016. VerbKB: A Knowledge Base of Verbs for Natural Language Understanding. Ph.D. thesis, Carnegie Mellon University. WordNet. 2010. About wordnet. http://wordnet.princeton.edu. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference for Machine Learning (ICML). Bishan Yang and Tom Mitchell. 2016. Joint extraction of events and entities within a document context. In North American Chapter of the Association for Computational Linguistics (NAACL-HLT). pages 289–299. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. International Conference on Learning Representations (ICLR) . Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). 1446
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1447–1456, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1133

Prerequisite Relation Learning for Concepts in MOOCs
Liangming Pan, Chengjiang Li, Juanzi Li∗ and Jie Tang
Knowledge Engineering Laboratory
Department of Computer Science and Technology
Tsinghua University, Beijing 100084, China
(∗corresponding author)
{panlm14@mails,licj17@mails,lijuanzi,tangjie}tsinghua.edu.cn

Abstract

What prerequisite knowledge should students master before moving forward to learn subsequent coursewares? We study the extent to which the prerequisite relation between knowledge concepts in Massive Open Online Courses (MOOCs) can be inferred automatically, and in particular what kinds of information can be leveraged to uncover potential prerequisite relations between knowledge concepts. We first propose a representation learning-based method for learning latent representations of course concepts, and then investigate how different features capture the prerequisite relations between concepts. Our experiments on three datasets from Coursera show that the proposed method achieves significant improvements (+5.9-48.0% by F1-score) compared with existing methods.

1 Introduction

Mastery learning was first formally proposed by Benjamin Bloom in 1968 (Bloom, 1981), suggesting that students must achieve a level of mastery (e.g., 90% on a knowledge test) in prerequisite knowledge before moving forward to learn subsequent knowledge concepts. Since then, prerequisite relations between knowledge concepts have become a cornerstone for designing curricula in schools and universities. Prerequisite relations can essentially be considered as dependencies among knowledge concepts. They are crucial for people to learn, organize, apply, and generate knowledge (Laurence and Margolis, 1999). Figure 1 shows a real example from Coursera. The student wants to learn "Conditional Random Field" (in video18 of CS229). The prerequisite knowledge might be "Hidden Markov Model" (in video25 of CS224), whose prerequisite knowledge is "Maximum Likelihood" (in video12 of Math112).

Figure 1: An example of prerequisite relations in MOOCs.

Organizing the knowledge structure with prerequisite relations in education improves tasks such as curriculum planning (Yang et al., 2015), automatic reading list generation (Jardine, 2014), and improving education quality (Rouly et al., 2015). For example, as shown in Figure 1, with explicit prerequisite relations among concepts (in red), a coherent and reasonable learning sequence can be recommended to the student (in blue). In the past, prerequisite relationships were provided by teachers or teaching assistants (Novak, 1990); in the era of MOOCs, however, this is becoming infeasible, as teachers are faced with hundreds of thousands of students from various backgrounds. Meanwhile, the rapid growth of Massive Open Online Courses has offered thousands of courses, and students are free to choose any course from the thousands of candidates.
Therefore, there is a clear need for methods to automatically dig out the prerequisite relationships among knowledge concepts from the large course space, so that the students from different background can easily explore the knowledge space and better design their personalized learning schedule. There are a few efforts aiming to automatically detect prerequisite relations for knowledge base. For example, Talukdar and Cohen (2012) proposed a method for inferring prerequisite relationships between entities in Wikipedia and Liang et al. (2015) presented a more general approach 1447 to predict prerequisite relationships. A few other works intend to extract prerequisite relationships from textbooks (Yosef et al., 2011; Wang et al., 2016). However, it is far from sufficient to directly apply these methods to the MOOC environments due to the following reasons. First, the focus of most previous attempts has been on prerequisite inference of Wikipedia concepts (either Wikipedia articles or Wikipedia concepts in textbooks). Many course concepts are not included in Wikipedia (Schweitzer, 2008; Okoli et al., 2014). We can leverage Wikipedia, in particular the existing entity relationships in Wikipedia, but cannot only rely on Wikipedia for detecting prerequisite relations in MOOCs. Second, with the thousands of courses from different universities and also very different disciplinaries, the MOOC scenario is much more complicated — there are not only inter-course concept relationships, but also intracourse and even intra-disciplinary relationships. Moreover, user interactions with the MOOC system might be also helpful to identify the prerequisite relations. How to fully leverage the different information to obtain a better performance for inferring prerequisite relations in MOOCs is a challenging issue. In this paper, we attempt to figure out what kinds of information in MOOCs can be used to uncover the prerequisite relations among concepts. Specifically, we consider it from three aspects, including course concept semantics, course video context and course structure. First, semantic relatedness plays an important role in prerequisite relations between concepts. If two concepts have very different semantic meanings (e.g., “matrix” and “anthropology”), it is unlikely that they have prerequisite relations. However, statistical features in MOOCs do not provide sufficient information for capturing the concept semantics because of the short length of course videos in MOOCs, we propose an embedding-based method to incorporate external knowledge from Wikipedia to learn semantic representations of concepts in MOOCs. Based on it, we propose one semantic feature to calculate the semantic relatedness between concepts. Second, motivated by the reference distance (RefD) (Liang et al., 2015), we propose three new contextual features, i.e., Video Reference Distance, Sentence Reference Distance and Wikipedia Reference Distance, to infer prerequisite relations in MOOCs based on context information from different aspects, which are more general and informative than RefD and overcome its sparsity problem. Third, we examine different distributional patterns for concepts in MOOCs, including appearing position, distributional asymmetry, video coverage and survival time. We further propose three structural features to utilize these patterns to help prerequisite inference in MOOCs. 
To evaluate the proposed method, we construct three datasets, each of which consists of multiple real courses in a specific domain from Coursera 1, the largest MOOC platform in the world. We also compare our method with the representative works of prerequisite learning and make a deep analysis of the feature contribution proposed in the paper. The experimental results show that our method achieves the state-of-the-art results in the prerequisite relation discovery in MOOCs. In summary, our contributions include: a) the first attempt, to the best of our knowledge, to detect prerequisite relations among concepts in MOOCs; b) proposal of a set of novel features that utilize contextual, structural and semantic information in MOOCs to identify prerequisite relations; c) design of three useful datasets based on real courses of Coursera to evaluate our method. 2 Problem Formulation In this section, we first give some necessary definitions and then formulate the problem of prerequisite relation learning in MOOCs. A MOOC corpus is composed by n courses in the same subject area, denoted as D = {C1, · · · , Ci, · · · , Cn}, where Ci is one course. Each course C can be further represented as a video sequence C = (V1, · · · , Vi, · · · , V|C|), where Vi denotes the i-th teaching video of course C. Finally, we view each video V as a document of its video texts (video subtitles or speech script), i.e., V = (s1 · · · si · · · s|V|), where si is the i-th sentence of the video texts. Course concepts are subjects taught in the course, i.e., the concepts not only mentioned but also discussed and taught in the course. Let us denote the course concept set of D as K = K1 ∪ · · · ∪Kn, where Ki is the set of course concepts in Ci. Prerequisite relation learning in MOOCs is formally defined as follows. Given a MOOC corpus D and its corresponding course concepts 1https://www.coursera.org/ 1448 K, the objective is to learn a function P : K2 → {0, 1} that maps a concept pair ⟨a, b⟩, where a, b ∈ K, to a binary class that predicts whether a is a prerequisite concept of b. In order to learn this mapping, we need to answer two crucial questions. How could we represent a course concept? What information regarding a concept pair is helpful to capture their prerequisite relation? We first propose an embedding-based method to learn appropriate semantic representations for each course concept in K. Based on the learned representations, we propose 7 novel features to capture whether a concept pair has prerequisite relation. These features utilize different aspects of information and can be classified into 1 semantic feature, 3 contextual features and 3 structural features. In the following section, we first describe the semantic representations in detail, and then formally introduce our proposed features. 3 Method 3.1 Concept Representation & Semantic Relatedness We first learn appropriate representations for course concepts. Given the course concepts K as input, we utilize a Wikipedia corpus to learn semantic representations for concepts in K. A Wikipedia corpus W is a set of Wikipedia articles and can be represented as a sequence of words W = ⟨w1 · · · wi · · · wm⟩, where wi denotes a word and m is the length of the word sequence. Our method consists of two steps: (1) entity annotation, and (2) representation learning. Entity Annotation. 
We first automatically annotate the entities in W to obtain an entity set E and an entity-annotated Wikipedia corpus W′ = ⟨x1 · · · xi · · · xm′⟩, where xi corresponds to a word w ∈W or an entity e ∈E. Note that m′ < m because multiple adjacent words could be labeled as one entity. Many entity linking tools are available for entity annotation, e.g. TAGME (Ferragina and Scaiella, 2010), AIDA (Yosef et al., 2011) and TremenRank (Cao et al., 2015). However, the rich hyperlinks created by Wiki editors provide a more natural way. In our experiments, we simply use the hyperlinks in Wikipedia articles as annotated entities. Representation Learning. We then learn word embeddings (Mikolov et al., 2013b,a) on W′ to obtain low-dimensional, real-valued vector representations for entities in E and words in W. Let us denote ve and vw as the vector of e ∈E and w ∈W, respectively. For a course concept a ∈K, suppose a is a N-gram term ⟨g1 · · · gN⟩ and g1, · · · , gN ∈W, we obtain its semantic representations va as follows. va = ( ve, if a ≡e and e ∈E vg1 + · · · + vgN , otherwise (1) It means that if a is a Wikipedia entity, we can directly obtain its semantic representations; otherwise, we obtain its vector via the vector addition of its individual word vectors. In this way, a has no corresponding vector only if any of its constituent word is absence in the whole Wikipedia corpus. This case is unusual because a large online encyclopedia corpus can easily cover almost all individual words of the vocabulary. Our experimental results verify that over 98% of the course concepts have vector representations. Feature 1: Semantic Relatedness For a given concept pair ⟨a, b⟩, the semantic relatedness between a and b, denoted as ω(a, b), is our first feature (the only semantic feature). With learned semantic representations, semantic relatedness of two concepts can be easily reflected by their distance in the vector space. We define ω(a, b) ∈[0, 1] as the normalized cosine distance between va and vb, as follows. ω(a, b) = 1 2(1 + va · vb ∥va∥· ∥vb∥) (2) 3.2 Contextual Features Context information in course videos provides important clues to infer prerequisite relations. In videos where concept A is taught, if the teacher also mentions concept B for a lot but not vice versa, then B is more likely to be a prerequisite of A than A of B. For example, “gradient descent” is a prerequisite concept of “back propagation”. In teaching videos of “back propagation”, the concept “gradient descent” is frequently mentioned when illustrating the optimization detail of back propagation. On the contrary, however, “back propagation” is unlikely to be mentioned when teaching “gradient descent”. A similar observation also exists in Wikipedia, based on which Liang et al. (2015) proposed an indicator, namely reference distance (RefD), to infer prerequisite relations among Wikipedia articles. However, RefD is computed based on the link structure of Wikipedia, thus is only feasible for Wikipedia 1449 concepts and is not applicable in plain text. We overcome the above shortcomings of RefD to propose three novel features, which utilize different aspects of context information—course videos, video sentences and Wikipedia articles—to infer prerequisite relations in MOOCs. Feature 2: Video Reference Distance Given a concept pair ⟨a, b⟩where a, b ∈K, we propose the video reference weight (V rw) to quantify how b is referred by videos of a, defined as follows. 
$$Vrw(a, b) = \frac{\sum_{C \in D} \sum_{V \in C} f(a, V) \cdot r(V, b)}{\sum_{C \in D} \sum_{V \in C} f(a, V)} \quad (3)$$

where $f(a, V)$ indicates the term frequency of concept $a$ in video $V$, which reflects how important concept $a$ is to this video, and $r(V, b) \in \{0, 1\}$ denotes whether concept $b$ appears in video $V$. Intuitively, $Vrw(a, b)$ tends to be larger if $b$ appears in more important videos of $a$, and the range of $Vrw(a, b)$ is between 0 and 1. Then, the video reference distance ($Vrd$) is defined as the difference of $Vrw$ between the two concepts, as follows:

$$Vrd(a, b) = Vrw(b, a) - Vrw(a, b) \quad (4)$$

In practice, this feature may be too sparse if the MOOC corpus is small: an arbitrary concept pair may never co-occur in any course video. We therefore extend the video reference distance to a more general version by considering the semantic relatedness among concepts. Besides the cases in which $a$ refers to $b$, we also consider the cases in which $a$-related concepts refer to $b$. We first define the generalized video reference weight ($GVrw$) as follows:

$$GVrw(a, b) = \frac{\sum_{i=1}^{M} Vrw(a_i, b) \cdot \omega(a_i, b)}{\sum_{i=1}^{M} \omega(a_i, b)} \quad (5)$$

where $a_1, \cdots, a_M \in K$ are the top-$M$ most similar concepts of $a$, measured by the semantic relatedness function $\omega(\cdot, \cdot)$ of Feature 1. $GVrw$ is the weighted average of $Vrw(a_i, b)$, indicating how often $b$ is referred to by $a$-related concepts in their corresponding videos. Note that $a_1 = a$, thus $GVrw(a, b) \equiv Vrw(a, b)$ when $M = 1$. Similarly, we define the generalized video reference distance ($GVrd$) as follows:

$$GVrd(a, b) = GVrw(b, a) - GVrw(a, b) \quad (6)$$

Intuitively, if most $b$-related concepts refer to $a$ but not vice versa, then $a$ is likely to be a prerequisite of $b$. For example, it is plausible for the related concepts of "gradient descent", e.g., "steepest descent" and "Newton's method", to mention "matrix", but clearly not vice versa.

Feature 3: Sentence Reference Distance

Sentence reference distance is similar to Feature 2, but operates at the sentence level. Following the same design pattern as Feature 2, we define the sentence reference weight ($Srw$) and sentence reference distance ($Srd$) as follows:

$$Srw(a, b) = \frac{\sum_{C \in D} \sum_{V \in C} \sum_{s \in V} r(s, a) \cdot r(s, b)}{\sum_{C \in D} \sum_{V \in C} \sum_{s \in V} r(s, a)} \quad (7)$$

$$Srd(a, b) = Srw(b, a) - Srw(a, b) \quad (8)$$

where $r(s, a) \in \{0, 1\}$ is an indicator of whether concept $a$ appears in sentence $s$. $Srw(a, b)$ is the fraction of the sentences containing $a$ that also mention $b$. We also define the generalized sentence reference weight ($GSrw$) and generalized sentence reference distance ($GSrd$) as follows:

$$GSrw(a, b) = \frac{\sum_{i=1}^{M} Srw(a_i, b) \cdot \omega(a_i, b)}{\sum_{i=1}^{M} \omega(a_i, b)} \quad (9)$$

$$GSrd(a, b) = GSrw(b, a) - GSrw(a, b) \quad (10)$$

Feature 4: Wikipedia Reference Distance

Contextual information from Wikipedia is also useful for detecting prerequisite relations. As mentioned before, RefD is not general enough to be applied in our setting, because it is limited to Wikipedia concepts. Therefore, we improve this indicator to a more general one, which is also suitable for non-wiki concepts. Specifically, for a concept $a \in K$, let us denote the top-$M$ most related wiki entities of $a$ as $R_a = \langle e_1, \cdots, e_M \rangle$, where $e_1, \cdots, e_M \in E$. Because concepts in $K$ and entities in $E$ are jointly embedded in the same vector space in Section 3.1, we can easily obtain $R_a$ with the semantic relatedness metric $\omega(\cdot, \cdot)$ of Feature 1. We then define the Wikipedia reference weight ($Wrw$) as follows:

$$Wrw(a, b) = \frac{\sum_{e \in R_a} Erw(e, b) \cdot \omega(e, a)}{\sum_{e \in R_a} \omega(e, a)} \quad (11)$$

where $Erw(e, b)$ is a binary indicator: $Erw(e, b) = 1$ if the Wikipedia article of $e$ refers to any entity in $R_b$, and $Erw(e, b) = 0$ otherwise. $Wrw(a, b)$ measures how frequently $a$-related wiki entities refer to $b$-related wiki entities. Finally, the Wikipedia reference distance ($Wrd$) is defined as the difference of $Wrw$ between $a$ and $b$, i.e., $Wrd(a, b) = Wrw(b, a) - Wrw(a, b)$.
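A minimal Python sketch of the semantic relatedness of Eq. (2) and the (generalized) video reference features of Eqs. (3)-(6) is given below, assuming the MOOC corpus is represented in memory as courses, each a list of videos, each video a list of extracted concept mentions. The function names and the corpus representation are our own assumptions, not the authors' implementation.

```python
import numpy as np

def semantic_relatedness(v_a, v_b):
    """Eq. (2): normalized cosine similarity between two concept vectors."""
    cos = (v_a @ v_b) / (np.linalg.norm(v_a) * np.linalg.norm(v_b))
    return 0.5 * (1.0 + cos)

def video_reference_weight(a, b, corpus):
    """Eq. (3): how much b is referred to in the videos of a.
    `corpus` is a list of courses; each course is a list of videos;
    each video is a list of already-extracted concept mentions."""
    num = den = 0.0
    for course in corpus:
        for video in course:
            f_a = video.count(a)                 # term frequency f(a, V)
            r_b = 1.0 if b in video else 0.0     # indicator r(V, b)
            num += f_a * r_b
            den += f_a
    return num / den if den > 0 else 0.0

def video_reference_distance(a, b, corpus):
    """Eq. (4): Vrd(a, b) = Vrw(b, a) - Vrw(a, b)."""
    return (video_reference_weight(b, a, corpus)
            - video_reference_weight(a, b, corpus))

def generalized_vrd(a, b, corpus, related, vectors):
    """Eqs. (5)-(6): weight Vrw over the top-M related concepts of each side.
    `related[c]` lists the top-M concepts most similar to c (c itself first),
    and `vectors[c]` is the concept embedding used for the weights."""
    def gvrw(x, y):
        num = den = 0.0
        for x_i in related[x]:
            w = semantic_relatedness(vectors[x_i], vectors[y])   # omega(x_i, y)
            num += video_reference_weight(x_i, y, corpus) * w
            den += w
        return num / den if den > 0 else 0.0
    return gvrw(b, a) - gvrw(a, b)

# Toy corpus: one course with three videos.
corpus = [[["matrix", "gradient descent", "gradient descent"],
           ["back propagation", "gradient descent"],
           ["back propagation", "gradient descent"]]]
# A positive value suggests a is referred to more from b's videos than vice versa.
print(video_reference_distance("gradient descent", "back propagation", corpus))
```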
3.3 Structural Features

Since course concepts are usually introduced following their learning dependencies, the structure of MOOC courses also contributes significantly to prerequisite relation inference in MOOCs. However, structure-based features for prerequisite detection have not been well studied in previous works. In this section, we investigate different kinds of structural information, including the appearing positions of concepts, the learning dependencies of videos, and the complexity levels of concepts, and propose three novel features to infer prerequisite relations in MOOCs. Before introducing these features, let us define two useful notations. $C(a)$ denotes the courses in which $a$ is a course concept, i.e., $C(a) = \{C_i \mid C_i \in D, a \in K_i\}$. $I(C, a)$ denotes the indexes of the videos in course $C$ that contain concept $a$. For example, if $a$ appears in the first and the 4-th video of $C$, then $I(C, a) = \{1, 4\}$.

Feature 5: Average Position Distance

In a course, for a specific concept, its prerequisite concepts tend to be introduced before it and its subsequent concepts tend to be introduced after it. Based on this observation, for a concept pair $\langle a, b \rangle$, we calculate the distance between the average appearing positions of $a$ and $b$ as one feature, namely the average position distance ($Apd$). If $C(a) \cap C(b) \neq \emptyset$, $Apd(a, b)$ is formally defined as follows:

$$Apd(a, b) = \frac{\sum_{C \in C(a) \cap C(b)} \left( \frac{\sum_{i \in I(C, a)} i}{|I(C, a)|} - \frac{\sum_{j \in I(C, b)} j}{|I(C, b)|} \right)}{|C(a) \cap C(b)|} \quad (12)$$

If $C(a) \cap C(b) = \emptyset$, we set $Apd(a, b) = 0$.

Feature 6: Distributional Asymmetry Distance

We also use the learning dependencies of course videos to help infer the learning dependencies of course concepts. Based on our observation, the chance that a prerequisite concept is frequently mentioned in its subsequent videos is larger than the chance that a subsequent concept is talked about in its prerequisite videos. Specifically, if video $V_a$ is a precursor video of $V_b$, and $a$ is a prerequisite concept of $b$, then it is likely that $f(b, V_a) < f(a, V_b)$, where $f(a, V)$ denotes the term frequency of $a$ in video $V$. We thus define another feature, namely the distributional asymmetry distance ($Dad$), to calculate the extent to which a given concept pair satisfies this distributional asymmetry pattern. Formally, in course $C$, for a given concept pair $\langle a, b \rangle$, we first define $S(C) = \{(i, j) \mid i \in I(C, a), j \in I(C, b), i < j\}$, i.e., all possible video pairs of $\langle a, b \rangle$ that have a sequential relation. Then, the distributional asymmetry distance of $\langle a, b \rangle$ is formally defined as follows:

$$Dad(a, b) = \frac{\sum_{C \in C(a) \cap C(b)} \frac{\sum_{(i, j) \in S(C)} \left( f(a, V_i^C) - f(b, V_j^C) \right)}{|S(C)|}}{|C(a) \cap C(b)|} \quad (13)$$

where $V_i^C$ denotes the $i$-th video of course $C$. If $C(a) \cap C(b) = \emptyset$, we set $Dad(a, b) = 0$.

Feature 7: Complexity Level Distance

Two related concepts with a prerequisite relationship tend to differ in their complexity level, meaning that one concept is basic while the other is advanced. For example, "data set" and "training set" have a learning dependency and the latter concept is more advanced than the former. In contrast, "test set" and "training set" have no such relation, as their complexity levels are similar. The complexity level of a course concept is implicit in its distribution across courses. Specifically, we observe that if a concept in MOOCs covers more videos in a course or survives for a longer time in a course, it is more likely to be a basic concept rather than an advanced one. We then formally define the average video coverage ($avc$) and the average survival time ($ast$) of a concept $a$ as follows:

$$avc(a) = \frac{1}{|C(a)|} \sum_{C \in C(a)} \frac{|I(C, a)|}{|C|} \quad (14)$$

$$ast(a) = \frac{1}{|C(a)|} \sum_{C \in C(a)} \frac{\max(I(C, a)) - \min(I(C, a)) + 1}{|C|} \quad (15)$$

where $\max(I(C, a))$ and $\min(I(C, a))$ give the video indexes where $a$ appears for the last and first time in course $C$, respectively. Based on the above equations, we define the complexity level distance ($Cld$) between concepts $a$ and $b$ as follows:

$$Cld(a, b) = avc(a) \cdot ast(a) - avc(b) \cdot ast(b) \quad (16)$$
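Continuing under the same assumed corpus representation as the previous sketch, the following Python snippet illustrates the average position distance of Eq. (12) and the complexity level distance of Eqs. (14)-(16); the distributional asymmetry distance of Eq. (13) can be written in the same style. Names are illustrative only.

```python
def positions(course, concept):
    """I(C, a): 1-based indexes of the videos in course C that mention the concept."""
    return [i + 1 for i, video in enumerate(course) if concept in video]

def average_position_distance(a, b, corpus):
    """Eq. (12): difference of average appearing positions, averaged over
    the courses shared by a and b; 0 if they share no course."""
    diffs = []
    for course in corpus:
        Ia, Ib = positions(course, a), positions(course, b)
        if Ia and Ib:
            diffs.append(sum(Ia) / len(Ia) - sum(Ib) / len(Ib))
    return sum(diffs) / len(diffs) if diffs else 0.0

def complexity_level_distance(a, b, corpus):
    """Eqs. (14)-(16): Cld(a, b) = avc(a) * ast(a) - avc(b) * ast(b)."""
    def avc_ast(concept):
        avcs, asts = [], []
        for course in corpus:                      # courses in C(concept)
            I = positions(course, concept)
            if I:
                avcs.append(len(I) / len(course))                      # video coverage
                asts.append((max(I) - min(I) + 1) / len(course))       # survival time
        if not avcs:
            return 0.0, 0.0
        return sum(avcs) / len(avcs), sum(asts) / len(asts)
    avc_a, ast_a = avc_ast(a)
    avc_b, ast_b = avc_ast(b)
    return avc_a * ast_a - avc_b * ast_b

# Toy corpus: one course with three videos, indexed 1..3 by position.
corpus = [[["matrix", "vector"],
           ["matrix", "gradient descent"],
           ["back propagation", "gradient descent"]]]
print(average_position_distance("matrix", "back propagation", corpus))   # -1.5
print(complexity_level_distance("matrix", "back propagation", corpus))   # ~0.33
```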
4 Experiments

4.1 Data Sets

In order to validate the effectiveness of our features, we conducted experiments on three MOOC corpora from different domains: "Machine Learning" (ML), "Data Structure and Algorithms" (DSA), and "Calculus" (CAL). To the best of our knowledge, there is no public data set for mining prerequisite relations in MOOCs. We created the experimental data sets through a three-stage process. First, for each chosen domain, we select its relevant courses from Coursera, one of the leading MOOC platforms, and download all course materials using coursera-dl 2, a widely-used tool for automatically downloading Coursera.org videos. For example, for ML, we select 5 related courses 3 from 5 different universities and obtain a total of 548 course videos. Then, we manually label course concepts for each course: (1) extract candidate concepts from the documents of video subtitles following the method of Parameswaran et al. (2010); (2) label the candidates as "course concept" or "not course concept" and obtain a set of course concepts for this course. Finally, we manually annotate the prerequisite relations among the labeled course concepts. If the number of course concepts is $n$, the number of all possible pairs to be checked could reach $n \times (n - 1)/2$, which requires arduous human labeling work. Therefore, for each dataset, we randomly select 25 percent of all possible pairs for evaluation. For each course concept pair $\langle a, b \rangle$, three human annotators majoring in the corresponding domain were asked to label it as "a is b's prerequisite", "b is a's prerequisite", or "no prerequisite relationship", using their own knowledge background and additional textbook resources. We take a majority vote of the annotators to create the final labels and assess the inter-annotator agreement using the average of the pairwise κ statistics (Landis and Koch, 1981) between all pairs of the three annotators.

Dataset  #courses  #videos  #concepts  #pairs(−)  #pairs(+)  κ
ML       5         548      244        5,676      1,735      0.63
DSA      8         449      201        3,877      1,148      0.65
CAL      7         359      128        1,411      621        0.59

Table 1: Dataset Statistics

The statistics of the three datasets are listed in Table 1, where #courses and #videos are the total numbers of courses and videos in each dataset and #concepts is the number of labeled course concepts.
The #pairs denotes the number of labeled concept pairs for evaluation, in which ‘+’ 2https://github.com/coursera-dl/coursera-dl 3These courses are: “Machine Learning (Stanford)”, “Machine Learning (Washington)”, “Practical Machine Learning (JHU)”, “Machine Learning With Big Data (UCSD)” and “Neural Networks for Machine Learning (UofT)” Classifier ML DSA CAL M 1 10 1 10 1 10 SVM P 63.2 60.1 60.7 62.3 61.1 61.9 R 68.5 72.4 69.3 67.5 67.9 68.3 F1 65.8 65.7 64.7 64.8 64.3 64.9 NB P 58.0 58.2 62.9 62.6 60.1 60.6 R 58.1 60.5 62.3 61.8 61.2 62.1 F1 58.1 59.4 62.6 62.2 60.6 61.3 LR P 66.8 67.6 63.1 62.0 62.7 63.3 R 60.8 61.0 64.8 66.8 63.6 64.1 F1 63.7 64.2 63.9 64.3 61.6 62.9 RF P 68.1 71.4 69.1 72.7 67.3 70.3 R 70.0 73.8 68.4 72.3 67.8 71.9 F1 69.1 72.6 68.7 72.5 67.5 71.1 Table 2: Classification results of the proposed method(%). denotes the number of positive instances, i.e. pairs who have prerequisite relations, and ‘−’ denotes the number of negative instances. 4.2 Evaluation Results For each dataset, we apply 5-fold cross validation to evaluate the performance of the proposed method, i.e., testing our method on one fold while training the classifier using the other 4 folds. Usually, there are much fewer positive instances than negative instances, so we balance the training set by oversampling the positive instances (Yosef et al., 2011; Talukdar and Cohen, 2012). In our experiments, we employ 4 different binary classifiers, including Na¨ıveBayes (NB), Logistic Regression (LR), SVM with linear kernel (SVM) and Random Forest (RF). We use precision (P), recall (R), and F1-score (F1) to evaluate the prerequisite classification results. The experimental results are presented in Table 2. Contextual features are shaped by the parameter M, i.e., the number of related concepts being considered. In our experiments, we tried different settings of M and report the results when M=1 and M=10 in Table 2. As for the semantic representation, we use the latest publicly available Wikipedia dump 4 and apply the skip-gram model (Mikolov et al., 2013b) to train word embeddings using the Python library gensim 5 with default parameters. As shown in Table 2, the evaluation results varies by different classifiers. It turns out that Na¨ıveBayes performs the worst. This seems to be caused by the fact that the independence assumption is not satisfied for our features; for 4https://dumps.wikimedia.org/enwiki/20170120/ 5http://radimrehurek.com/gensim/ 1452 example, Feature 2 and Feature 3 both utilize the local context information, only with different granularity, thus are quite co-related. Random Forest beats others, with best F1 across all three datasets. Its average F1 outperforms SVM, NB and LR by 7.0%, 11.1% and 8.3%, respectively (M=10). The reason is as follows. Instead of a simple descriptive feature, each of our proposed feature determines whether a concept pair has prerequisite relation from a specific aspect; its function is similar to an independent weak classifier. Therefore, rather than using a linear combination of features for classification (e.g., SVM and LR), a boosting model (e.g., Random Forest) is more suitable for this task. The performance is slightly better when M=10 for all classifiers, with +0.20% for SVM, +0.53% for NB, +0.73% for LR and +3.63% for RF, with respect to the average F1. The results verify the effectiveness of considering related concepts in contextual features. We use RF and set M=10 in the following experiments. 
4.3 Comparison with Baselines We further compare our approach with three representative methods for prerequisite inference. 4.3.1 Baseline Approaches Hyponym Pattern Method (HPM). Prerequisite relationships often exists between hyponymhypernym concept pairs (e.g., “Machine Learning” and “Supervised Learning”). As a baseline, we adopt the 10 lexico-syntactic patterns used by Wang et al. (2016) to extract hyponym relationships between concepts. If a concept pair matches at least one of these patterns in the MOOC corpus, we judge them to have prerequisite relations. Reference Distance (RD) We also employ the RefD proposed by Liang et al. (2015) as one of our baselines. However, this method is only appliable to Wikipedia concepts. To make it comparable with our method, for each of our datasets, we construct a subset of it by picking out the concept pairs ⟨a, b⟩in which a and b are both Wikipedia concepts. For example, we find 49% of course concepts in ML have their corresponding Wikipedia articles and 28% percent of concept pairs in ML meet the above condition. We use the new datasets constructed from ML, DSA and CAL, namely W-ML, W-DSA, and W-CAL, to compare our method with RefD. Supervised Relationship Identification (SRI) Wang et al. (2016) has employed several feaMethod ML DSA CAL WML WDSA WCAL HPM P 67.3 71.4 69.5 79.9 72.3 73.5 R 18.4 14.8 16.5 25.5 27.3 23.3 F1 29.0 24.5 26.7 38.6 39.6 35.4 RD P − − − 73.4 77.8 74.4 R − − − 42.8 44.8 43.1 F1 − − − 54.1 56.8 54.6 T-SRI P 61.4 62.3 62.5 58.1 60.1 62.7 R 62.9 64.6 65.5 67.6 65.3 67.9 F1 62.1 63.4 64.0 62.5 62.6 65.2 F-SRI P − − − 64.3 64.3 64.8 R − − − 62.1 65.6 65.2 F1 − − − 63.2 64.9 65.0 MOOC P 71.4 72.7 70.3 72.8 68.4 71.4 R 73.8 72.3 71.9 71.3 72.0 70.8 F1 72.6 72.5 71.1 72.0 70.2 71.1 Table 3: Comparison with baselines(%). tures to infer prerequisite relations of Wikipedia concepts in textbooks, including 3 Textbook features and 6 Wikipedia features. Based on these features, they performed a binary classification using SVM to identify prerequisite relationships and has achieved state-of-the-art results. Because the Wikipedia features can only be applied to Wikipedia concepts, in order to make a comparison, we create two versions of their method: (1) T-SRI: only textbook features are used to train the classifier and (2) F-SRI: the original version, all features are used. We compare the performance of our method with T-SRI on ML, DSA and CAL datasets; we also compare our method with F-SRI on W-ML, W-DSA and W-CAL datasets. 4.3.2 Performance Comparison In Table 3 we summarize the comparing results of different methods across different datasets (“MOOC” refers to our method). We find that our method outperforms baseline methods across all six datasets 6. For example, the F1 of our method on ML outperforms T-SRI and HPM by 10.5% and 43.6%, respectively. Specifically, we have the following observations. First, HPM achieves relatively high precision but low recall. This is because when A “is a” B, a prerequisite relation often exists from B to A, but clearly not vise versa. Second, T-SRI has certain effectiveness for learning prerequisite relations, with F1 ranging from 62.1 to 65.2%. However, T-SRI only considers relatively simple features, such as the sequential and co-occurrence among concepts. With more 6The improvements are all statistically significant tested with bootstrap re-sampling with 95% confidence. 
1453 comprehensive feature engineering, the F1 of our method significantly outperforms T-SRI (+10.5% on ML, +9.1% on DSA and +7.1% on CAL). Third, incorporating Wikipedia-based features (FSRI) achieves certain promotion in performance (+0.93% comparing with T-SRI in average F1). 4.4 Feature Contribution Analysis In order to get an insight into the importance of each feature in our method, we perform a contribution analysis with different features. Here, we run our approach 10 times on the ML dataset. In each of the first 7 times, one feature is removed; in each of the rest 3 times, one group of features are removed, e.g., removing contextual features means removing Gvrd, Gsrd and Wrd at the same time. We record the decrease of F1-score for each setting. Table 4 lists the evaluation results after ignoring different features. According to the decrement of F1-scores, we find that all the proposed features are useful in predicting prerequisite relations. Especially, we observe that Cld (Feature 7), decreasing our best F1score by 7.4%, plays the most important role. This suggests that most concepts do exist difference in complexity level. For two concepts, the difference of their coverage and survival times in courses are important for prerequisite relation detection. On the contrary, with 1.9% decrease, Sr (Feature 1) is relatively less important. We may easily find two concepts which have related semantic meanings (e.g., “test set” and “training set”) but have no prerequisite relationship. However, semantic relatedness is critical for the contextual features because it overcomes the problem of the sparsity of context in calculation. We experience a decrease of 5.4% when we further do not consider related concepts in contextual features, i.e., set M=1. As for the feature group contribution, we observe that Structural Features, with a decrease of 9.2%, has a greater impact than the other two groups. This is as expected because it includes Cld. Among the three structural features, Apd makes relatively less contribution. The reason is that sometimes the professor may frequently mention a prerequisite concept after introducing a subsequent concept orally, for helping students better understand the concept. 5 Related Works To the best of our knowledge, there has been no previous work on mining prerequisite relations Ignored Feature(s) P R F1 Single Sr 69.6 72.9 71.2(-1.4) GVrd 68.8 71.4 70.1(-2.5) GSrd 67.9 71.4 69.6(-3.0) Wrd 70.1 72.1 71.1(-1.5) Apd 69.7 70.8 70.2(-2.4) Dad 69.2 69.5 69.4(-3.2) Cld 64.9 65.6 65.2(-7.4) Group Semantic 69.6 72.9 71.2(-1.4) Contextual 66.4 68.9 67.6(-5.0) Structural 63.7 64.2 63.4(-9.2) Table 4: Contribution analysis of different features(%). among concepts in MOOCs. Some researchers have been engaged in detecting other type of prerequisite relations. For example, Yang et al. (2015) proposed to induce prerequisite relations among courses to support curriculum planning. Liu et al. (2011) studied learning-dependency between knowledge units, a special text fragment containing concepts, using a classification-based method. In the area of education, researchers have tried to find general prerequisite structures from students’ test performance (Vuong et al., 2011; Scheines et al., 2014; Huang et al., 2015). Different from them, we focus on more finegrained prerequisite relations, i.e., the prerequisite relations among course concepts. Among the few related works of mining prerequisite relations among concepts, Liang et al. 
(2015) and Talukdar and Cohen (Talukdar and Cohen, 2012) studied prerequisite relationships between Wikipedia articles. They assumed that hyperlinks between Wikipedia pages indicate a prerequisite relationship and design several useful features. Based on these Wikipedia features plus some textbook features, Wang et al. (Wang et al., 2016) proposed a method to construct a concept map from textbooks, which jointly learns the key concepts and their prerequisite relations. However, the investigation of only Wikipedia concepts is also the bottleneck of their studies. In our work, we propose more general features to infer prerequisite relations among concepts, regardless of whether the concept is in Wikipedia or not. Liang et al. (2017) propose an optimization based framework to discover concept prerequisite relations from course dependencies. Gordon et al. (2016) utilize cross-entropy to learn concept dependencies in scientific corpus. Besides local statistical information, our method also utilize external knowledge to enrich concept semantics, which is more informativeness. 1454 Our work is also related to the study of automatic relation extraction. Different research lines have been proposed around this topic, including hypernym-hyponym relation extraction (Ritter et al., 2009; Wei et al., 2012), entity relation extraction (Zhou et al., 2006; Fan et al., 2014; Lin et al., 2015) and open relation extraction (Fader et al., 2011). However, previous works mainly focus on factual relations, the extraction of cognitive relations (e.g. prerequisite relations) has not been well studied yet. 6 Conclusions and Future Work We conducted a new investigation on automatically inferring prerequisite relations among concepts in MOOCs. We precisely define the problem and propose several useful features from different aspects, i.e., contextual, structural and semantic features. Moreover, we apply an embeddingbased method that jointly learns the semantic representations of Wikipedia concepts and MOOC concepts to help implement the features. Experimental results on online courses with different domains validate the effectiveness of the proposed method. Promising future directions would be to investigate how to utilize user interaction in MOOCs for better prerequisite learning, as well as how deep learning models can be used to automatically learn useful features to help infer prerequisite relations. Acknowledgments This work is supported by 973 Program (No. 2014CB340504), NSFC Key Program (No. 61533018), Fund of Online Education Research Center, Ministry of Education (No. 2016ZD102), Key Technologies Research and Development Program of China (No. 2014BAK04B03) and NSFC-NRF (No. 61661146007). References Benjamin Samuel Bloom. 1981. All our children learning: A primer for parents, teachers, and other educators. McGraw-Hill Companies. Yixin Cao, Juanzi Li, Xiaofei Guo, Shuanhu Bai, Heng Ji, and Jie Tang. 2015. Name list only? target entity disambiguation in short texts. In Proceedings of EMNLP. pages 654–664. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of EMNLP. pages 1535– 1545. Miao Fan, Deli Zhao, Qiang Zhou, Zhiyuan Liu, Thomas Fang Zheng, and Edward Y. Chang. 2014. Distant supervision for relation extraction with matrix completion. In Proceedings of ACL. pages 839– 849. Paolo Ferragina and Ugo Scaiella. 2010. TAGME: on-the-fly annotation of short text fragments (by wikipedia entities). In Proceedings of CIKM. pages 1625–1628. 
Jonathan Gordon, Linhong Zhu, Aram Galstyan, Prem Natarajan, and Gully Burns. 2016. Modeling concept dependencies in a scientific corpus. In Proceedings of ACL. Xiaopeng Huang, Kyeong Yang, and Victor B. Lawrence. 2015. An efficient data mining approach to concept map generation for adaptive learning. In Proceedings of ICDM. pages 247–260. James Gregory Jardine. 2014. Automatically generating reading lists. Ph.D. thesis, University of Cambridge, UK. RJ Landis and GG Koch. 1981. The measurement of interrater agreement. Statistics methods for rates and proportions 2:212–236. Stephen Laurence and Eric Margolis. 1999. Concepts and cognitive science. Concepts: core readings pages 3–81. Chen Liang, Zhaohui Wu, Wenyi Huang, and C. Lee Giles. 2015. Measuring prerequisite relations among concepts. In Proceedings of EMNLP. pages 1668–1674. Chen Liang, Jianbo Ye, Zhaohui Wu, Bart Pursel, and C. Lee Giles. 2017. Recovering concept prerequisite relations from university course dependencies. In Proceedings of AAAI. pages 4786–4791. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of AAAI. pages 2181–2187. Jun Liu, Lu Jiang, Zhaohui Wu, Qinghua Zheng, and Ya-nan Qian. 2011. Mining learning-dependency between knowledge units from text. The VLDB Journal 20(3):335–345. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. International Journal of CoRR abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. pages 3111– 3119. 1455 Joseph D. Novak. 1990. Concept mapping: A useful tool for science education. International Journal of Research in Science Teaching 27(10):937C949. Chitu Okoli, Mohamad Mehdi, Mostafa Mesgari, Finn ˚Arup Nielsen, and Arto Lanam¨aki. 2014. Wikipedia in the eyes of its beholders: A systematic review of scholarly research on wikipedia readers and readership. International Journal of the American Society for Information Science and Technology (JASIST) 65(12):2381–2403. Aditya G. Parameswaran, Hector Garcia-Molina, and Anand Rajaraman. 2010. Towards the web of concepts: Extracting concepts from large datasets. Proceedings of the VLDB Endowment (PVLDB) 3(1):566–577. Alan Ritter, Stephen Soderland, and Oren Etzioni. 2009. What is this, anyway: Automatic hypernym discovery. In Proceedings of AAAI. pages 88–93. Jean Michel Rouly, Huzefa Rangwala, and Aditya Johri. 2015. What are we teaching?: Automated evaluation of CS curricula content using topic modeling. In Proceedings of ICER. pages 189–197. Richard Scheines, Elizabeth Silver, and Ilya M. Goldin. 2014. Discovering prerequisite relationships among knowledge components. In Proceedings of EDM. pages 355–356. Nick J Schweitzer. 2008. Wikipedia and psychology: Coverage of concepts and its use by undergraduate students. International Journal of Teaching of Psychology 35(2):81–85. Partha Pratim Talukdar and William W Cohen. 2012. Crowdsourced comprehension: predicting prerequisite structure in wikipedia. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP. pages 307–315. Annalies Vuong, Tristan Nixon, and Brendon Towle. 2011. A method for finding prerequisites within a curriculum. In Proceedings of EDM. pages 211– 216. 
Shuting Wang, Alexander Ororbia, Zhaohui Wu, Kyle Williams, Chen Liang, Bart Pursel, and C Lee Giles. 2016. Using prerequisites to extract concept maps fromtextbooks. In Proceedings of CIKM. pages 317–326. Bifan Wei, Jun Liu, Jian Ma, Qinghua Zheng, Wei Zhang, and Boqin Feng. 2012. MOTIF-RE: motifbased hypernym/hyponym relation extraction from wikipedia links. In Proceedings of ICONIP. pages 610–619. Yiming Yang, Hanxiao Liu, Jaime G. Carbonell, and Wanli Ma. 2015. Concept graph learning from educational data. In Proceedings of WSDM. pages 159–168. Mohamed Amir Yosef, Johannes Hoffart, Ilaria Bordino, Marc Spaniol, and Gerhard Weikum. 2011. AIDA: an online tool for accurate disambiguation of named entities in text and tables. Proceedings of the VLDB Endowment (PVLDB) 4(12):1450–1453. Guodong Zhou, Jian Su, and Min Zhang. 2006. Modeling commonality among related classes in relation extraction. In Proceedings of ACL. 1456
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1457–1469 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1134 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1457–1469 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1134 Unsupervised Text Segmentation Based on Native Language Characteristics Shervin Malmasi1,2 Mark Dras2 Mark Johnson2 Lan Du3 Magdalena Wolska4 1Harvard Medical School, Harvard University [email protected] 2Department of Computing, Macquarie University { shervin.malmasi, mark.dras, mark.johnson }@mq.edu.au 3Faculty of IT, Monash University [email protected] 4LEAD Graduate School, Universit¨at T¨ubingen [email protected] Abstract Most work on segmenting text does so on the basis of topic changes, but it can be of interest to segment by other, stylistically expressed characteristics such as change of authorship or native language. We propose a Bayesian unsupervised text segmentation approach to the latter. While baseline models achieve essentially random segmentation on our task, indicating its difficulty, a Bayesian model that incorporates appropriately compact language models and alternating asymmetric priors can achieve scores on the standard metrics around halfway to perfect segmentation. 1 Introduction Most work on automatically segmenting text has been on the basis of topic: segment boundaries correspond to topic changes (Hearst, 1997). There are various contexts, however, in which it is of interest to identify changes in other characteristics; for example, there has been work on identifying changes in authorship (Koppel et al., 2011) and poetic voice (Brooke et al., 2012). In this paper we investigate text segmentation on the basis of change in the native language of the writer. Two illustrative contexts where this task might be of interest are patchwriting detection and literary analysis. Patchwriting is the heavy use of text from a different source with some modification and insertion of additional words and sentences to form a new text. Pecorari (2003) notes that this is a kind of textual plagiarism, but is a strategy for learning to write in an appropriate language and style, rather than for deception. Keck (2006), Gilmore et al. (2010) and Vieyra et al. (2013) found that non-native speakers, not surprisingly in situations of imperfect mastery of a language, are strongly over-represented in this kind of textual plagiarism. In these cases the boundaries between the writer’s original text and (near-)copied native text are often quite apparent to the reader, as in this short example from Li and Casanave (2012) (copied text italicised): “Nevertheless, doubtfulness can be cleared reasonably by the experiments conducted upon the ‘split-brain patients’, in whom intra-hemispheric communication is no longer possible. 
To illustrate, one experiment has the patient sit at a table with a non-transparent screen blocking the objects behind, who is then asked to reach the objects with different hand respectively.” Because patchwriting can indicate imperfect comprehension of the source (Jamieson and Howard, 2013), identifying it and supporting novice writers to improve it has become a focus of programmes like the Citation Project.1 For the second, perhaps more speculative context of literary analysis, consider Joseph Conrad, known for having written a number of famous English-language novels, such as Heart of Darkness; he was born in Poland and moved to England at the age of 21. His writings have been the subject of much manual analysis, with one particular direction of such research being the identification of likely influences on his English writing, including his native Polish language and the French he learnt before English. Morzinski (1994), for instance, notes aspects of his writing that exhibit Polish-like syntax, verb inflection, or other linguistic characteristics (e.g. “Several had still their staves in their hands” where the awkwardly placed adverb still is typical of Polish). These appear both in isolated sentences and in larger chunks of text, and part of 1http://citationproject.net/ 1457 an analysis can involve identifying these chunks. This raises the question: Can NLP and computational models identify the points in a text where native language changes? Treating this as an unsupervised text segmentation problem, we present the first Bayesian model of text segmentation based on authorial characteristics, applied to native language. 2 Related Work Topic Segmentation The most widelyresearched text segmentation task has as its goal to divide a text into topically coherent segments. Lexical cohesion (Halliday and Hasan, 1976) is an important concept here: the principle that text is not formed by a random set of words and sentences but rather logically ordered sets of related words that together form a topic. In addition to the semantic relation between words, other methods such as back-references and conjunctions also help achieve cohesion. Based on this, Morris and Hirst (1991) proposed the use of lexical chains, sequences of related words (defined via thesaurus), to break up a text into topical segments: breaks in lexical chains indicate breaks in topic. The TextTiling algorithm (Hearst, 1994, 1997) took a related approach, defining a function over lexical frequency and distribution information to determine topic boundaries, and assuming that each topic has its own vocabulary and that large shifts in this vocabulary usage correspond to topic shifts. There have been many approaches since that time. A key one, which is the basis for our own work, is the unsupervised Bayesian technique BAYESSEG of Eisenstein and Barzilay (2008), based on a generative model that assumes that each segment has its own language model. Under this assumption the task can be framed as predicting boundaries at points which maximize the probability of a text being generated by a given language model. Their method is based on lexical cohesion — expressed in this context as topic segments having compact and consistent lexical distributions — and implements this within a probabilistic framework by modelling words within each segment as draws from a multinomial language model associated with that segment. 
Much other subsequent work either uses this as a baseline, or extends it in some way: Jeong and Titov (2010), for example, who propose a model for joint discourse segmentation and alignment for documents with parallel structures, such as a text with commentaries or presenting alternative views on the same topic; or Du et al. (2013), who use hierarchical topic structure to improve the linear segmentation. Bible Authorship Koppel et al. (2011) consider the task of decomposing a document into its authorial components based on their stylistic properties and propose an unsupervised method for doing so. The authors use as their data two biblical books, Jeremiah and Ezekiel, that are generally believed to be single-authored: their task was to segment a single artificial text constructed by interleaving chapters of the two books. Their most successful method used work in biblical scholarship on lexical choice: they give as an example the case that in Hebrew there are seven synonyms for the word fear, and that different authors may choose consistently from among them. Then, having constructed their own synsets using available biblical resources and annotations, they represent texts by vectors of synonyms and apply a modified cosine similarity measure to compare and cluster these vectors. While the general task is relevant to this paper, the particular notion of synonymy here means the approach is specific to this problem, although their approach is extended to other kinds of text in Akiva and Koppel (2013). Aldebei et al. (2015) proposed a new approach motivated by this work, similarly clustering sentences, then using a Naive Bayes classifier with modified prior probabilities to classify sentences. Poetry Voice Detection Brooke et al. (2012) perform stylistic segmentation of a well-known poem, The Waste Land by T.S. Eliot. This poem is renowned for the great number of voices that appear throughout the text and has been the subject of much literary analysis (Bedient and Eliot, 1986; Cooper, 1987). These distinct voices, conceived of as representing different characters, have differing tones, lexis and grammatical styles (e.g. reflecting the level of formality). The transitions between the voices are not explicitly marked in the poem and the task here is to predict the breaks where these voice changes occur. The authors argue that the use of generative models is not feasible for this task, noting: “Generative models, which use a bag-of-words assumption, have a very different problem: in their standard form, they can capture only lexical cohesion, which is not the (primary) focus of stylistic analysis.” 1458 They instead present a method based on a curve that captures stylistic change, similar to the TextTiling approach but generalised to use a range of features. The local maxima in this change curve represent potential breaks in the text. The features are both internal to the poem (e.g. word length, syllable count, POS tag) as well as external (e.g. average unigram counts in the 1T Corpus or sentiment polarity from a lexicon). Results on an artificially constructed mixed-style poem achieve a Pk of 0.25. Brooke et al. (2013) extend this by considering clustering following an initial segmentation. Native Language Identification (NLI) NLI casts the detecting of native language (L1) influence in writing in a non-native (L2) language as a classification task: the framing of the task in this way comes from Koppel et al. (2005). There has been much activity on it in the last few years, with Tetreault et al. 
(2012) providing a comprehensive analysis of features that had been used up until that point, and a shared task in 2013 (Tetreault et al., 2013) that attracted 29 entrants. The shared task introduced a new, now-standard dataset, TOEFL11, and work has continued on improving classification results, e.g. by Bykh and Meurers (2014) and Ionescu et al. (2014). In addition to work on the classification task itself, there have also been investigations of the features used, and how they might be employed elsewhere. Malmasi and Cahill (2015) examine the effectiveness of individual feature types used in the shared task and the diversity of those features. Of relevance to the present paper, simple part-of-speech n-grams alone are fairly effective, with classification accuracies of between about 40% and 65%; higher-order n-grams are more effective than lower, and the more finegrained CLAWS2 tagset more effective than the Penn Treebank tagset. An area for application of these features is in Second Language Acquisition (SLA), as a data-driven approach to finding L1-related characteristics that might be a result of cross-linguistic influence and consequently a possible starting for an SLA hypothesis (Ellis, 2008); Swanson and Charniak (2013) and Malmasi and Dras (2014) propose methods for this. Tying It Together Contra Brooke et al. (2012), we show that it is possible to develop effective generative models for segmentation on stylistic factors, of the sort used for topic segmentation. To apply it specifically to segmentation based on a writer’s L1, we draw on work in NLI. 3 Experimental Setup We investigate the task of L1-based segmentation in three stages: 1. Can we define any models that do better than random, in a best case scenario? For this best case scenario, we determine results over a devset with the best prior found by a grid search, for a single language pair likely to be relatively easily distinguishable. Note that as this is unsupervised segmentation, it is a devset in the sense that it is used to find the best prior, and also in a sense that some models as described in §4 use information from a related NLI task on the underlying data. 2. If the above is true, do the results also hold for test data, using priors derived from the devset? 3. Further, do the results also hold for all language pairs available in our dataset, not just a single easily distinguishable pair? We first describe the evaluation data — artificial texts generated from learner essays, similar to the artificially constructed texts of previously described work on Bible authorship and poetry segmentation — and evaluation metrics, followed in §4 by the definitions of our Bayesian models. 3.1 Source Data We use as the source of data the TOEFL11 dataset used for the NLI shared task (Blanchard et al., 2013) noted in §2. The data consists of 12100 essays by writers with 11 different L1s, taken from TOEFL tests where the test-taker is given a prompt2 as the topic for the essay. The corpus is balanced across L1s and prompts (which allows us to verify that segmentation isn’t occurring on the basis of topic), and is split into standard training, dev and test sets. 3.2 Document Generation As the main task is to segment texts by the author’s L1, we want to ensure that we are not segmenting by topic and thus use texts written by authors from different L1 backgrounds on the same topic (prompt). 
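To make the construction concrete, the following sketch illustrates how composite documents of this kind can be assembled from same-prompt essays of two L1s, as detailed in the remainder of this subsection and in Appendix A.1. It is an illustration under stated assumptions rather than the exact dataset-construction script: the function and field names are introduced here for exposition, and the precise sampling details may differ from those used to build the released datasets.

```python
import random

def make_composite_documents(essays_a, essays_b, segments_per_doc=5, seed=0):
    """Build composite documents whose segments alternate between two L1s.

    essays_a / essays_b are lists of same-prompt essays (each essay a list of
    sentences). Each drawn essay becomes one segment; the gold boundaries are
    the sentence indices where the L1 changes. Illustrative only.
    """
    rng = random.Random(seed)
    pools = [list(essays_a), list(essays_b)]
    for pool in pools:
        rng.shuffle(pool)
    documents = []
    while all(pools):                       # stop once either L1 pool is empty
        sentences, boundaries, labels = [], [], []
        for i in range(segments_per_doc):
            pool = pools[i % 2]             # alternate L1 after each segment
            if not pool:
                break
            essay = pool.pop()
            if sentences:                   # a boundary precedes every new segment
                boundaries.append(len(sentences))
            sentences.extend(essay)
            labels.append("L1a" if i % 2 == 0 else "L1b")
        documents.append({"sentences": sentences,
                          "boundaries": boundaries,
                          "segment_l1s": labels})
    return documents
```

The same routine applies unchanged to the topic-varying datasets by holding the L1 fixed and letting the two pools contain essays from two different prompts.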
We will also create one dataset to verify that segmentation by topic works in this domain; for this we use texts written by authors from the same L1 background on different topics. For our L1-varying datasets, we construct composite documents to be segmented as alternat2For example, prompt P7 is: “Do you agree or disagree with the following statement? It is more important for students to understand ideas and concepts than it is for them to learn facts. Use reasons and examples to support your answer.” 1459 ing segments drawn from TOEFL11 from two different L1s holding the topic (prompt) constant, broadly following a standard approach (Brooke et al., 2012, for example) (see Appendix A.1 for details). We follow the same process for our topicvarying datasets, but hold the L1 constant while alternating the topic (prompt). For our single pair of L1s, we choose German and Italian. German is the class with the highest NLI accuracy in the TOEFL11 corpus across the shared task results and Italian also performs very well. Additionally, there is very little confusion between the two; a binary NLI classifier we trained on the language pair achieved 97% accuracy. For our all-pairs results, given the 11 languages in the TOEFL11 corpus, we have 55 sets of documents of alternating L1s (one of which is German–Italian). We generate four distinct types of datasets for our experiments using the above methodology. The documents in these datasets, as described below, differ in the parameters used to select the essays for each segment and what type of tokens are used. Tokens (words) can be represented in their original form and used for performing segmentation. Alternatively, using an insight from Wong et al. (2012), we can represent the documents at a level other than lexical: the text could consist of the POS tags corresponding to all of the tokens, or n-grams over those POS tags. The POS representation is motivated by the usefulness of POSbased features for capturing L1-based stylistic differences as noted in §2. Our method for encoding n-grams is described in Appendix A.2. TOPICSEG-TOKENS This data is generated by keeping the L1 class constant and alternating segments between two topics. We chose Italian for the L1 class and essays from the prompts “P7” and “P8” are used. The dataset, constructed from TOEFL11-TRAIN and TOEFL11-DEV, contains a total of 53 artificial documents, and will be used to verify that topic segmentation as discussed in Eisenstein and Barzilay (2008) functions as expected for data from this domain: that is, that topic change is detectable. TOPICSEG-PTB Here the tokens in each text are replaced with their POS tags or n-grams over those tags, and the segmentation is performed over this data. In this dataset the tags are obtained via the Stanford Tagger and use the Penn Treebank (PTB) tagset. The same source data (TOEFL11TRAIN and TOEFL11-DEV), L1 and topics as TOPICSEG-TOKENS are used for a total of 53 documents. This dataset will be used to investigate, inter alia, whether segmentation over these stylistically related features could take advantage of topic cues. We would expect not. L1SEG-PTB This dataset is used for segmentation based on native language, also using (n-grams over) the PTB POS tags. We choose a specific topic and then retrieve all essays from the corpus that match this; here we chose prompt “P7”, since it had the largest number of essays for our chosen single L1 pair, German–Italian. 
For the dataset constructed from TOEFL11-TRAIN and TOEFL11DEV (which we will refer to as L1SEG-PTB-GIDEV), this resulted in 57 documents. Documents that are composites of two L1s are then generated as described above. For investigating questions 2 and 3 above, we similarly have datasets constructed from the the smaller TOEFL11-TEST data (L1SEG-PTB-GI-TEST), which consist of 5 documents of 5 segments each for the single L1 pair, and from all language pairs (L1SEG-PTB-ALLDEV, L1SEG-PTB-ALL-TEST). We would expect that these datasets should not be segmentable by topic, as all the segments are on the same topic; the segments should however, differ in stylistic characteristics related to the L1. L1SEG-CLAWS2 This dataset is generated using the same methodology as L1SEG-PTB, with the exception that the essays are tagged using the RASP tagger which uses the more fine-grained CLAWS2 tagset, noting that the CLAWS2 tagset performed better in the NLI classification task (Malmasi and Cahill, 2015). 3.3 Evaluation We use the standard Pk (Beeferman et al., 1999) and WindowDiff (WD) (Pevzner and Hearst, 2002) metrics, which (broadly speaking) select sentences using a moving window of size k and determines whether these sentences correctly or incorrectly fall into the same or different reference segmentations. Pk and WD scores range between 0 and 1, with a lower score indicating better performance, and 0 a perfect segmentation. It has been noted that some “degenerate” algorithms — such as placing boundaries randomly or at every possible position — can score 0.5 (Pevzner and Hearst, 2002). WD scores are typically similar to Pk, correcting for differential penalties between false positive boundaries and false negatives implicit in Pk. Pk and WD scores reported in §5 are 1460 averages across all documents in a dataset. Formal definitions are given in Appendix A.3. 4 Segmentation Models For all of our segmentation we use as a starting point the unsupervised Bayesian method of Eisenstein and Barzilay (2008); see §2.3 We recap the important technical definitions here. In Equation 1 of their work they define the observation likelihood as, p(X | z, Θ) = T Y t p(xt | θzt), (1) where X is the set of all T sentences, z is the vector of segment assignments for each sentence, xt is the bag of words drawn from the language model and Θ is the set of all K language models Θ1 . . . ΘK. As is standard in segmentation work, K is assumed to be fixed and known (Malioutov and Barzilay, 2006); it is set to the actual number of segments. The authors also impose an additional constraint, that zt must be equal to either zt−1 (the previous sentence’s segment) or zt−1 +1 (the next segment), in order to ensure a linear segmentation. This segmentation model has two parameters: the set of language models Θ and the segment assignment indexes z. The authors note that since this task is only concerned with the segment assignments, searching in the space of language models is not desirable. They offer two alternatives to overcome this: (1) taking point estimates of the language models, which is considered to be theoretically unsatisfying and (2) marginalizing them out, which yields better performance. Equation 7 of Eisenstein and Barzilay (2008), reproduced here, shows how they marginalize over the language models, supposing that each language model is drawn from a symmetric Dirichlet prior (i.e. 
θj ∼Dir(θ0)): p(X | z, θ0) = K Y j pdcm({xt : zt = j} | θ0) (2) The Dirichlet compound multinomial distribution pdcm expresses the expectation over all the multinomial language models, when conditioned on the symmetric Dirichlet prior θ0: 3An open-source implementation of the method, called BAYESSEG, is made available by the authors at http:// groups.csail.mit.edu/rbg/code/bayesseg/ pdcm({xt : zt = j} | θ0) = Γ(Wθ0) Γ(Nj + Wθo) W Y i Γ(nj,i + θ0) Γ(θ0) (3) where W is the number of words in the vocabulary and Nj = PW i nj,i, the total number of words in the segment j. They then observe that the optimal segmentation maximizes the joint probability p(X, z | θ0) = p(X | z, θ0)p(z) and assume a uniform p(z) over valid segmentations with no probability mass assigned to invalid segmentations. The hyperparameter θ0 can be chosen, or can be learned via an ExpectationMaximization process. Inference Eisenstein and Barzilay (2008) defined two methods of inference, a dynamic programming (DP) one and one using MetropolisHastings (MH). Only MH is applicable where shifting a boundary will affect the probability of every segment, not just adjacent segments, as in their model incorporating cue phrases. Where this is not the case, they use DP inference. Their DP inference algorithm is suitable for all of our models, so we also use that. Priors For our priors, we carry out a grid search on the devsets (that is, the datasets derived from TOEFL11-TRAIN and TOEFL11-DEV) in the interval [0.1, 3.0], partitioned into 30 evenly spaced values; this includes both weak and strong priors.4 4.1 TOPICSEG Our first model is exactly the one proposed by Eisenstein and Barzilay (2008) described above. The aim here is to look at how we perform at segmenting learner essays by topic in order to confirm that topic segmentation works for this domain and these types of topics. We apply this model to the TOPICSEG-TOKENS and TOPICSEG-PTB datasets where the texts have the same L1 and boundaries are placed between essays of differing topics (prompts). 4.2 L1SEG Our second model modifies that of Eisenstein and Barzilay (2008) by revising the generative story. 4The Eisenstein and Barzilay (2008) code does implement an EM method for finding priors in the symmetric case, but we found that perhaps surprisingly the grid search almost always found better ones. 1461 Where they assume a standard generative model over words with constraints on topic change between sentences, we make minor modifications to adapt the model for our task. The standard generative story (Blei, 2012) — an account of how a model generates the observed data — usually generates words in a two-stage process: (1) For each document, randomly choose a distribution of topics. (2) For each word in the document: (a) Assign a topic from those chosen in step 1. (b) Randomly choose a word from that topic’s vocabulary. Here we modify this story to be over part-ofspeech data instead of lexical items. By using this representation (which as noted in §2 is useful for NLI classification) we aim to segment our texts based on the L1 of the author for each segment. For this model we only make use of the L1SEGPTB-GI-DEV dataset.5 4.3 L1SEG-COMP It is not obvious that the same properties that produce compact distributions in standard lexical chains would also be the case for POS data, particularly if extended to POS n-grams which can result in a very large number of potential tokens. 
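For concreteness, the marginalised likelihood of Equations 2 and 3 can be sketched in a few lines. This is only an illustration of the quantity being maximised, not the BAYESSEG implementation (whose dynamic-programming search over boundaries is omitted); the function names, the boundary representation and the assumption of a symmetric prior theta0 are introduced here for exposition.

```python
import math
from collections import Counter

def log_pdcm(segment_tokens, vocab, theta0):
    """Log Dirichlet compound multinomial likelihood of one segment (Eq. 3),
    under a symmetric Dirichlet prior theta0 over a vocabulary of size W."""
    counts = Counter(segment_tokens)
    W = len(vocab)
    n_j = sum(counts.values())
    ll = math.lgamma(W * theta0) - math.lgamma(n_j + W * theta0)
    for w in vocab:
        ll += math.lgamma(counts.get(w, 0) + theta0) - math.lgamma(theta0)
    return ll

def segmentation_loglik(sentences, boundaries, vocab, theta0):
    """Log of Equation 2: the sum of log-pdcm terms over the segments induced
    by a linear segmentation. `boundaries` holds the index of the first
    sentence of each segment after the first, e.g. [3, 7] splits 10 sentences
    into [0:3], [3:7], [7:10]."""
    edges = [0] + list(boundaries) + [len(sentences)]
    ll = 0.0
    for start, end in zip(edges[:-1], edges[1:]):
        tokens = [tok for sent in sentences[start:end] for tok in sent]
        ll += log_pdcm(tokens, vocab, theta0)
    return ll
```

Note that only types actually occurring in a segment change the value, since the zero-count terms in the product of Equation 3 cancel; this is what rewards segments with compact distributions.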
In this regard Eisenstein and Barzilay (2008) note: “To obtain a high likelihood, the language models associated with each segment should concentrate their probability mass on a compact subset of words. Language models that spread their probability mass over a broad set of words will induce a lower likelihood. This is consistent with the principle of lexical cohesion.” Eisenstein and Barzilay (2008) discuss this within the context of topic segmentation.6 However, it is unclear if this would also would happen for POS tags; there is no syntactic analogue for the sort of lexical chains important in topic segmentation. It may then turn out that using all POS tags or n-grams over them as in the previous model would not achieve a strong performance. We thus use knowledge from the NLI classification task to help. Discarding Non-Discriminative Features One approach that could possibly overcome these lim5We also looked at including words. The results of these models were always worse, and we do not discuss them in this paper. 6For example, a topic segment related to the previously mentioned essay prompt P7 might concentrate its probability mass on the following set of words: {education, learning, understanding, fact, theory, idea, concept, knowledge}. itations is the removal of features from the input space that have been found to be nondiscriminative in NLI classification. This would allow us to encode POS sequence information via n-grams while also keeping the model’s vocabulary sufficiently small. Doing this requires the use of extrinsic information for filtering the n-grams. The use of such extrinsic information has proven to be useful for other similar tasks such as the poetry style change segmentation work of Brooke et al. (2012), as noted in §2. We perform this filtering using the discriminative feature lists derived from the NLI classification task using the system and method described in Malmasi and Dras (2014), also noted in §2. We extract the top 300 most discriminative POS ngram features for each L1 from TOEFL11-TRAIN and TOEFL11-DEV, resulting in two lists of 600 POS bigrams and trigrams; these are thus independent of our test datasets. (We illustrate a text with respect to these discriminative features in Appendix A.4.) Note that discriminative n-grams can overlap with each other within the same class and also between two classes. We resolve such conflicts by using the weights of the features from the classification task as described in Malmasi and Dras (2014) and choosing the feature with the higher weight. 4.4 L1SEG-ASYMP Looking at the distribution of discriminative features in our documents, one idea is that incorporating knowledge about which features are associated with which L1 could potentially help improve the results. One approach to do this is the use of asymmetric priors. We note that features associated with an L1 often dominate in a segment. Accordingly, priors can represent evidence external to the data that some some aspect should be weighted more strongly: for us, this is evidence from the NLI classification task. The segmentation models discussed so far only make use of a symmetric prior but later work mentions that it would be possible to modify this to use an asymmetric prior (Eisenstein, 2009). Given that priors are effective for incorporating external information, recent work has highlighted the importance of optimizing over such priors, and in particular, the use of asymmetric priors. Key work on this is by Wallach et al. 
(2009) on LDA, who report that “an asymmetric Dirichlet prior over the document-topic distributions has substantial advantages over a symmetric prior”, 1462 with prior values being determined through hyperparameter optimization. Such methods have since been applied in other tasks such as sentiment analysis (Lin and He, 2009; Lin et al., 2012) to achieve substantial improvements. For sentiment analysis, Lin and He (2009) incorporate external information from a subjectivity lexicon. In applying LDA, instead of using a uniform Dirichlet prior for the document–sentiment distribution, they use asymmetric priors for positive and negative sentiment, determined empirically. For our task, we assign a prior to each of two languages in a document, one corresponding to L1a and the other to L1b. Given this, we can assume that segments will alternate between L1a and L1b. And instead of a single θ0, we have two asymmetric priors that we call θa, θb corresponding to L1a and L1b respectively. This will require reworking the definition of pdcm in Equation 3. First adapting Equation 2, p(X | z, θa, θb) = Y {jo} pdcm({xt : zt = jo} | θa) · Y {je} pdcm({xt : zt = je} | θb), (4) with {jo} = {j | j mod 2 = 1, 1 ≤j ≤K} the set of indices over odd segments and {je} = {j | j mod 2 = 0, 1 ≤j ≤K} the set over evens. K is the (usual) total number of segments. Then pdcm({xt : zt = jo} | θa) = Γ(PW k θa[k]) Γ(Njo + PW k θa[k]) W Y i Γ(nj,i + θa[i]) Γ(θa[i]) (5) W is now more generally the number of items in our vocabulary (whether words or POS n-grams). A notational addition here is θa[k] which refers to the L1a prior for the kth word or POS n-gram. There is an analogous pdcm for θb. The next issue is how to construct the θa and θb. The simplest scenario would require a single constant value for all elements in one L1 and another for all elements in the other L1. Specifically, using discrim(L1x) to denote “the ranked list of discriminative n-grams for L1x”, we define θa[i] = ( c1 if θa[i] ∈discrim(L1a) c2 if θa[i] ∈discrim(L1b) and analogously for θb[i]. We would expect that c1 > c2 (i.e. the prior is stronger for elements that come from the appropriate ranked list of discriminative features), but these values will be learned. In principle we would calculate versions of p(X | z, θa, θb) twice: once where we assign θa to segment 1, and the second time where we assign θb. We’d then compare the two p(X | z, θa, θb), and see which one fits better. In this work, however, we will fix the initial L1: segment 1 corresponds to L1a and consequently has prior θa.7 5 Results 5.1 Segmenting by Topic We begin by testing the TOPICSEG model to ensure that the Bayesian segmentation methodology can achieve reasonable results for segmenting learner essays by topic. The results on the TOPICSEG-TOKENS dataset (Table 1) show that content words are very effective at segmenting the writings by topic, achieving Pk values in the range 0.19–0.21. These values are similar to those reported for segmenting Wall Street Journal text (Beeferman et al., 1999). On the other hand, using the PTB POS tag version of the data in the TOPICSEG-PTB dataset results in very poor segmentation results, with Pk values around 0.45. This is essentially the same as the performance of degenerate algorithms (noted in §3.3) of 0.5. This demonstrates that, as expected, POS unigrams do not provide enough information for topic segmentation; it is not possible to construct even an approximation to lexical chains using them. 
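Returning briefly to the model of §4.4, the alternating-prior likelihood of Equations 4 and 5 can be sketched as follows. The sketch assumes the compacted vocabulary of L1SEG-COMP (the union of the two discriminative n-gram lists) and treats the two prior strengths c1 and c2 as values to be found by grid search; the names are illustrative and this is not the extended BAYESSEG code itself.

```python
import math
from collections import Counter

def build_prior(vocab, discrim_own, c1, c2):
    """Asymmetric Dirichlet prior vector for one L1 (theta_a or theta_b):
    c1 for n-grams discriminative of that L1, c2 for the other L1's n-grams.
    Assumes every vocabulary item is on one of the two lists."""
    return {w: (c1 if w in discrim_own else c2) for w in vocab}

def log_pdcm_asym(segment_tokens, prior):
    """Equation 5: DCM likelihood of one segment under a (possibly
    asymmetric) Dirichlet prior given as a dict over the vocabulary."""
    counts = Counter(segment_tokens)
    prior_sum = sum(prior.values())
    n_j = sum(counts.values())
    ll = math.lgamma(prior_sum) - math.lgamma(n_j + prior_sum)
    for w, theta_w in prior.items():
        ll += math.lgamma(counts.get(w, 0) + theta_w) - math.lgamma(theta_w)
    return ll

def loglik_alternating(segments, prior_a, prior_b):
    """Equation 4: odd-numbered segments (1st, 3rd, ...) are scored with the
    L1a prior and even-numbered segments with the L1b prior."""
    ll = 0.0
    for j, seg_tokens in enumerate(segments, start=1):
        ll += log_pdcm_asym(seg_tokens, prior_a if j % 2 == 1 else prior_b)
    return ll
```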
5.2 L1-based Segmentation Having verified that the Bayesian segmentation approach is effective for topic segmentation on this data, we now turn to the L1SEG model for segmenting by the native language. From the results in Table 1 we see very poor performance with a Pk value of 0.466 for segmenting the texts in L1SEG-PTB-GI-DEV using the unigrams as is. This was a somewhat unexpected result given than we know POS unigram distributions are able to capture differences between L1groups (Malmasi and Cahill, 2015), albeit with limited accuracy. Moreover, neither bigram nor trigram encodings, which perform better at NLI, resulted in any improvement in our results. 7This requires an extension of the BAYESSEG software to support asymmetric priors. We will make this extended version of the code available under the same conditions as BAYESSEG. Please contact the first or second author for this. 1463 Model Dataset Prior(s) Pk WD TOPICSEG TOPICSEG-TOKENS 0.1 0.203 0.205 TOPICSEG TOPICSEG-PTB 0.8 0.444 0.480 L1SEG L1SEG-PTB-GI-DEV unigrams 0.1 0.466 0.489 L1SEG L1SEG-PTB-GI-DEV bigrams 0.8 0.466 0.487 L1SEG L1SEG-PTB-GI-DEV trigrams 0.8 0.480 0.489 L1SEG-COMP L1SEG-PTB-GI-DEV bigrams 0.1 0.476 0.490 L1SEG-COMP L1SEG-PTB-GI-DEV trigrams 0.4 0.393 0.398 L1SEG-COMP L1SEG-CLAWS2-GI-DEV bigrams 0.4 0.387 0.400 L1SEG-COMP L1SEG-CLAWS2-GI-DEV trigrams 0.4 0.370 0.373 L1SEG-ASYMP L1SEG-CLAWS2-GI-DEV trigrams (0.6,0.3) 0.316 0.318 Table 1: Results on devsets for single L1 pair (German–Italian). Model Pk WD L1SEG-COMP 0.358 0.360 L1SEG-ASYMP 0.266 0.271 Table 2: Results on testset L1SEG-CLAWS2-GITEST for single L1 pair (German–Italian). Priors are the ones from the corresponding devsets in Table 1. Model Pk WD L1SEG-COMP 0.365 (0.014) 0.369 (0.019) L1SEG-ASYMP 0.299 (0.022) 0.312 (0.027) L1SEG-COMP 0.376 (0.032) 0.381 (0.033) L1SEG-ASYMP 0.314 (0.043) 0.319 (0.045) Table 3: Results on dev and test datasets (upper: L1SEG-CLAWS2-ALL-DEV, lower: L1SEGCLAWS2-ALL-TEST): means and standard deviations (in parentheses) across datasets for all 55 L1 pairs. 5.3 Incorporating Discriminative Features Filtering the bigrams results in some minor improvements over the best results from the L1SEG model. However, there are substantial improvements when using the filtered POS trigrams, with a Pk value of 0.393. We did not test unigrams as they were the weakest NLI feature of the three. This improvement is, we believe, because the Bayesian modelling of lexical cohesion over the input tokens requires that each segment concentrates its probability mass on a compact subset of words. In the context of the n-gram tokenization method tested in the previous section, the L1SEG model with n-grams would most likely exacerbate the issue by substantially increasing the number of tokens in the language model: while the unigrams do not capture enough information to distinguish non-lexical shifts, the n-grams provide too many features. We also see that using the CLAWS2 tagset outperforms the PTB tagset. The results achieved for bigrams are much higher, while the trigram results are also better, with Pk = 0.370. NLI experiments using different POS tagsets have established that more fine-grained tagsets (i.e. those with more tag categories) provide greater classification accuracy when used as n-gram features for classification.8 Results here comport with the previous findings. 
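For reference, the feature filtering behind L1SEG-COMP (described in §4.3) can be sketched roughly as below. The weight dictionaries stand in for the ranked discriminative feature lists obtained from the NLI classifier, with overlaps resolved in favour of the class with the higher weight; the function names and cutoff handling are assumptions made for this illustration, not the released code.

```python
def build_discrim_sets(weights_a, weights_b, top_n=300):
    """weights_a / weights_b map POS n-gram tokens (e.g. 'DT-JJ-NN') to their
    positive classifier weights for each L1. N-grams appearing on both
    truncated lists are assigned to the L1 with the larger weight."""
    top_a = dict(sorted(weights_a.items(), key=lambda kv: -kv[1])[:top_n])
    top_b = dict(sorted(weights_b.items(), key=lambda kv: -kv[1])[:top_n])
    for ngram in set(top_a) & set(top_b):
        if top_a[ngram] >= top_b[ngram]:
            del top_b[ngram]
        else:
            del top_a[ngram]
    return set(top_a), set(top_b)

def filter_document(sentences, discrim_a, discrim_b):
    """Keep only the POS n-gram tokens that are discriminative for either L1,
    so that each segment's distribution stays compact."""
    keep = discrim_a | discrim_b
    return [[tok for tok in sent if tok in keep] for sent in sentences]
```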
As one of the two best models, we run it on the held-out test data, using the best priors found from the grid search on the devset data (Table 2); we find the Pk and WD values are comparable (and in fact slightly better), so the model still works if the filtering uses discriminative NLI features from the devset. Looking at results across all 55 L1 pairs (Table 3), we also see similar mean Pk and WD values with only a small standard deviation, indicating the approach works just as well across all language pairs. Priors here are all also weak, in the range [0.1, 0.9]. In sum, the results here demonstrate the importance of inducing a compact distribution, which we did here by reducing the vocabulary size by stripping non-informative features. 5.4 Applying Two Asymmetric Priors Our final model, L1SEG-ASYMP, assesses whether setting different priors for each L1 can improve performance. Our grid search over two priors gives 900 possible prior combinations. These combinations also include cases where θa and θb are symmetric, which is equivalent to the L1SEG-COMP model. We observe (Table 1) that 8In §2 we noted the comparison of PTB and CLAWS2 tagsets in Malmasi and Cahill (2015); also, Gyawali et al. (2013) compared Penn Treebank and Universal POS tagsets and found that the more fine-grained PTB ones did better. 1464 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 L1 B 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 L1 A Prior Grid Search Results (Coarse) 0.330 0.345 0.360 0.375 0.390 0.405 0.420 0.435 0.450 Figure 1: Heatmap over asymmetric priors on L1SEG-CLAWS2-ALL-DEV the prior pair of (0.6, 0.3) achieves a Pk value of 0.321, a substantial improvement over the previous best result of 0.370. Inspecting priors (see Figure 1 for a heatmap over priors) shows that the best results are in the region of weak priors for both values, which is consistent with the emphasis on compactness since weak priors result in more compact models (noted by e.g. Wang and Blei (2009)). Moreover, they are away from the diagonal, i.e. the L1SEG-COMP model will not produce the best results. A more fine-grained grid search, focusing on the range that provided the best results in the coarse search, can improve the results further still: over the interval [0.3, 0.9], partitioned into 60 evenly spaced values, finds a prior pair of (0.64, 0.32) that provides a slight improvement of the Pk value to 0.316. As with L1SEG-COMP, we also evaluate this on the same held-out test set (Table 2). Applying the best asymmetric prior from the devset grid search, this improves to 0.266. Again, results across all 55 L1 pairs (Table 3) show the same pattern, and much as for L1SEG-COMP, priors are all weak or neutral (range [0.1, 1.0]). These results thus demonstrate that setting an asymmetric prior gives the best performance on this task. 6 Conclusion and Future Work Applying the approach to our two illustrative applications of §1, patchwriting and literary analysis, would require development of relevant corpora. In both cases the distinction would be between native writing and writing that shows characteristics of a non-native speaker, rather than between two nonnative L1s. 
There isn’t yet a topic-balanced corpus like TOEFL11 which includes native speaker writing for evaluation, although we expect (given recent results on distinguishing native from nonnative text in Malmasi and Dras (2015)) that the techniques should carry over. For the literary analysis, as well, to bridge the gap between work like Morzinski (1994) and a computational application, it remains to be seen how precise an annotation is possible for this task. Additionally, the granularity of segmentation may need to be finer than sentence-level, as suggested by the examples in §1; this level of granularity hasn’t previously been tackled in unsupervised segmentation. In terms of possible developments for the models presented for the task here, previous NLI work has shown that other, syntactic features can be useful for capturing L1-based differences. The incorporation of these features for this segmentation task could be a potentially fruitful avenue for future work. We have taken a fairly straightforward approach which modifies the generative story. A more sophisticated approach would be to incorporate features into the unsupervised model. One such example is the work of Berg-Kirkpatrick et al. (2010) which demonstrates that each component multinomial of a generative model can be turned into a miniature logistic regression model with the use of a modified EM algorithm. Their results showed that the feature-enhanced unsupervised models which incorporate linguisticallymotivated features achieve substantial improvements for tasks such as POS induction and word segmentation. We note also that the models are potentially applicable to other stylistic segmentation tasks beyond L1 influence. As far as this initial work is concerned we have shown that, framed as a segmentation task, it is possible to identify units of text that differ stylistically in their L1 influence. We demonstrated that it is possible to define a generative story and associated Bayesian models for stylistic segmentation, and further that segmentation results improve substantially by compacting the n-gram distributions, achieved by incorporating knowledge about discriminative features extracted from NLI models. Our best results come from a model that uses alternating asymmetric priors for each L1, with the priors selected using a grid search and then evaluated on a held-out test set. 1465 Acknowledgements The authors thank John Pate for very helpful discussions in the early stages of the paper, and the three anonymous referees for useful suggestions. A Details on Dataset Generation and Evaluation A.1 Document Generation For our L1-varying datasets, we construct composite documents to be segmented as alternating segments drawn from TOEFL11 from two different L1s. Broadly following a standard approach (Brooke et al., 2012, for example), to generate such a document, we randomly draw TOEFL11 essays — each of which constitutes a segment — from the appropriate L1s and concatenate them, alternating the L1 class after each segment. This is repeated until the maximum number of segments per document, s, is reached. We generate multiple composite documents until all TOEFL11 have been used. In this work we use datasets generated with s = 5.9 We follow the same process for our topic-varying datasets, but hold the L1 constant while alternating the topic (prompt). A.2 Encoding n-gram information Lau et al. (2013) investigated the importance of n-grams within topic models over lexical items. 
They note that in topic modelling each token receives a topic label and that the words in a collocation — e.g. stock market, White House or health care — may receive different topic assignments despite forming a single semantic unit. They found that identifying collocations (via a t-test) and preprocessing the text to turn these into single tokens provides a notable improvement over a unigram bag of words. We implement a similar preprocessing step that converts each sentence within each document to a set of bigrams or trigrams using a sliding window, where each n-gram is represented by a single token. So, for example, the trigram DT JJ NN becomes a single token: DT-JJ-NN.
9 This is the average number of segments per chapter in the written text used by Eisenstein and Barzilay (2008). However, we have also successfully replicated our results using s = 7, 9.

A.3 Evaluation: Metric Definitions
Given two segmentations r (reference) and h (hypothesis) for a corpus of N sentences,

P_D(r, h) = \sum_{1 \le i \le j \le N} D(i, j) \left( \delta_r(i, j) \,\bar{\oplus}\, \delta_h(i, j) \right)   (6)

where δr(i, j) is an indicator function specifying whether i and j lie in the same reference segment, δh(i, j) similarly for a hypothesised segment, ⊕̄ is the XNOR function, and D is a distance probability distribution over the set of possible distances between sentences. For Pk, this D is defined by a fixed window of size k which contains all the probability mass, and k is set to be half the average reference segment length. The WD definition is:

WD(r, h) = \frac{1}{N - k} \sum_{i=1}^{N-k} \left( \left| b(r_i, r_{i+k}) - b(h_i, h_{i+k}) \right| > 0 \right)   (7)

where b(ri, rj) represents the number of boundaries between positions i and j in the reference text (similarly, the hypothesis text).

A.4 Visualisation of Discriminative Features
Figure 2 shows a visualization of the discriminative features of a single segment, where each row represents a sentence and each token is represented by a square. Tokens that are part of a trigram which is considered discriminative for either of our two L1 classes are shown in blue or red. Note that discriminative n-grams can overlap with each other within the same class (e.g. on lines 1 and 2, where two overlapping trigrams form a group of four consecutive tokens) and also between two classes (e.g. on lines 10 and 11).

[Figure 2: A visualization of sentences from a single segment. Each row represents a sentence and each token is represented by a square. Token trigrams considered discriminative for either of our two L1 classes are shown in blue or red, with the rest being considered non-discriminative.]
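The two metrics defined in A.3 are standard, but for completeness a small sketch of how they can be computed from per-sentence segment labels is given below. It follows the definitions above (window errors divided by the number of windows, with k defaulting to half the average reference segment length); it is illustrative code written for this description, not the evaluation scripts behind the reported numbers, and it assumes each segment carries a distinct label.

```python
def _boundaries_between(seg_labels, i, j):
    """Number of segment boundaries strictly between sentence positions i and j."""
    return sum(1 for t in range(i, j) if seg_labels[t] != seg_labels[t + 1])

def _default_k(reference):
    # half the average reference segment length (assumes distinct segment labels)
    return max(1, round(len(reference) / (2 * len(set(reference)))))

def p_k(reference, hypothesis, k=None):
    """Pk (Equation 6): slide a window of size k and count windows where the
    reference and hypothesis disagree on whether the window's two ends fall in
    the same segment."""
    N = len(reference)
    k = k or _default_k(reference)
    errors = sum(
        (reference[i] == reference[i + k]) != (hypothesis[i] == hypothesis[i + k])
        for i in range(N - k)
    )
    return errors / (N - k)

def window_diff(reference, hypothesis, k=None):
    """WindowDiff (Equation 7): penalise windows where the number of boundaries
    differs between reference and hypothesis."""
    N = len(reference)
    k = k or _default_k(reference)
    errors = sum(
        _boundaries_between(reference, i, i + k) != _boundaries_between(hypothesis, i, i + k)
        for i in range(N - k)
    )
    return errors / (N - k)
```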
1466 References Navot Akiva and Moshe Koppel. 2013. A Generic Unsupervised Method for Decomposing Multi-Author Documents. Journal of the American Society for Information Science and Technology (JASIST) 64(11):2256–2264. Khaled Aldebei, Xiangjian He, and Jie Yang. 2015. Unsupervised decomposition of a multi-author document based on naive-bayesian model. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 501–505. http://www.aclweb.org/anthology/P152082. Calvin Bedient and Thomas Stearns Eliot. 1986. He Do the Police in Different Voices: The Waste Land and its protagonist. University of Chicago Press. Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical Models for Text Segmentation. Machine Learning 34(1-3):177–210. https://doi.org/10.1023/A:1007506220214. Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cˆot´e, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Los Angeles, California, pages 582–590. http://www.aclweb.org/anthology/N10-1083. Daniel Blanchard, Joel Tetreault, Derrick Higgins, Aoife Cahill, and Martin Chodorow. 2013. TOEFL11: A Corpus of Non-Native English. Technical report, Educational Testing Service. David M. Blei. 2012. Probabilistic topic models. Communications of the ACM 55(4):77–84. Julian Brooke, Adam Hammond, and Graeme Hirst. 2012. Unsupervised Stylistic Segmentation of Poetry with Change Curves and Extrinsic Features. In Proceedings of the NAACLHLT 2012 Workshop on Computational Linguistics for Literature. Association for Computational Linguistics, Montr´eal, Canada, pages 26–35. http://www.aclweb.org/anthology/W12-2504. Julian Brooke, Graeme Hirst, and Adam Hammond. 2013. Clustering voices in the waste land. In Proceedings of the Workshop on Computational Linguistics for Literature. Association for Computational Linguistics, Atlanta, Georgia, pages 41–46. http://www.aclweb.org/anthology/W13-1406. Serhiy Bykh and Detmar Meurers. 2014. Exploring Syntactic Features for Native Language Identification: A Variationist Perspective on Feature Encoding and Ensemble Optimization. Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers pages 1962–1973. John Xiros Cooper. 1987. TS Eliot and the politics of voice: The argument of The Waste Land. 79. UMI Research Press. Lan Du, Wray Buntine, and Mark Johnson. 2013. Topic segmentation with a structured topic model. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 190–200. http://www.aclweb.org/anthology/N13-1019. Jacob Eisenstein. 2009. Hierarchical text segmentation from multi-scale lexical cohesion. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Boulder, CO, pages 353–361. www.aclweb.org/anthology/N091040. Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. 
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1470–1480, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1135
Weakly Supervised Cross-Lingual Named Entity Recognition via Effective Annotation and Representation Projection
Jian Ni and Georgiana Dinu and Radu Florian
IBM T. J. Watson Research Center
1101 Kitchawan Road, Yorktown Heights, NY 10598, USA
{nij, gdinu, raduf}@us.ibm.com
Abstract
The state-of-the-art named entity recognition (NER) systems are supervised machine learning models that require large amounts of manually annotated data to achieve high accuracy. However, annotating NER data by humans is expensive and time-consuming, and can be quite difficult for a new language. In this paper, we present two weakly supervised approaches for cross-lingual NER with no human annotation in a target language. The first approach is to create automatically labeled NER data for a target language via annotation projection on comparable corpora, where we develop a heuristic scheme that effectively selects good-quality projection-labeled data from noisy data. The second approach is to project distributed representations of words (word embeddings) from a target language to a source language, so that the source-language NER system can be applied to the target language without re-training. We also design two co-decoding schemes that effectively combine the outputs of the two projection-based approaches. We evaluate the performance of the proposed approaches on both in-house and open NER data for several target languages. The results show that the combined systems outperform three other weakly supervised approaches on the CoNLL data.
1 Introduction
Named entity recognition (NER) is a fundamental information extraction task that automatically detects named entities in text and classifies them into pre-defined entity types such as PERSON, ORGANIZATION, GPE (GeoPolitical Entities), EVENT, LOCATION, TIME, DATE, etc. NER provides essential inputs for many information extraction applications, including relation extraction, entity linking, question answering and text mining. Building fast and accurate NER systems is a crucial step towards enabling large-scale automated information extraction and knowledge discovery on the huge volumes of electronic documents existing today.
The state-of-the-art NER systems are supervised machine learning models (Nadeau and Sekine, 2007), including maximum entropy Markov models (MEMMs) (McCallum et al., 2000), conditional random fields (CRFs) (Lafferty et al., 2001) and neural networks (Collobert et al., 2011; Lample et al., 2016). To achieve high accuracy, an NER system needs to be trained with a large amount of manually annotated data, and is often supplied with language-specific resources (e.g., gazetteers, word clusters, etc.). Annotating NER data by humans is rather expensive and time-consuming, and can be quite difficult for a new language. This creates a big challenge in building NER systems for multiple languages to support multilingual information extraction applications.
The difficulty of acquiring supervised annotation raises the following question: given a welltrained NER system in a source language (e.g., English), how can one go about extending it to a new language with decent performance and no human annotation in the target language? There are mainly two types of approaches for building weakly supervised cross-lingual NER systems. The first type of approaches create weakly labeled NER training data in a target language. One way to create weakly labeled data is through annotation projection on aligned parallel corpora or translations between a source language and a target language, e.g., (Yarowsky et al., 2001; Zitouni and Florian, 2008; Ehrmann et al., 2011). Another way is to utilize the text and structure of 1470 Wikipedia to generate weakly labeled multilingual training annotations, e.g., (Richman and Schone, 2008; Nothman et al., 2013; Al-Rfou et al., 2015). The second type of approaches are based on direct model transfer, e.g., (T¨ackstr¨om et al., 2012; Tsai et al., 2016). The basic idea is to train a single NER system in the source language with language independent features, so the system can be applied to other languages using those universal features. In this paper, we make the following contributions to weakly supervised cross-lingual NER with no human annotation in the target languages. First, for the annotation projection approach, we develop a heuristic, language-independent data selection scheme that seeks to select good-quality projection-labeled NER data from comparable corpora. Experimental results show that the data selection scheme can significantly improve the accuracy of the target-language NER system when the alignment quality is low and the projectionlabeled data are noisy. Second, we propose a new approach for direct NER model transfer based on representation projection. It projects word representations in vector space (word embeddings) from a target language to a source language, to create a universal representation of the words in different languages. Under this approach, the NER system trained for the source language can be directly applied to the target language without the need for re-training. Finally, we design two co-decoding schemes that combine the outputs (views) of the two projection-based systems to produce an output that is more accurate than the outputs of individual systems. We evaluate the performance of the proposed approaches on both in-house and open NER data sets for a number of target languages. The results show that the combined systems outperform the state-of-the-art cross-lingual NER approaches proposed in T¨ackstr¨om et al. (2012), Nothman et al. (2013) and Tsai et al. (2016) on the CoNLL NER test data (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003). We organize the paper as follows. In Section 2 we introduce three NER models that are used in the paper. In Section 3 we present an annotation projection approach with effective data selection. In Section 4 we propose a representation projection approach for direct NER model transfer. In Section 5 we describe two co-decoding schemes that effectively combine the outputs of two projection-based approaches. In Section 6 we evaluate the performance of the proposed approaches. We describe related work in Section 7 and conclude the paper in Section 8. 2 NER Models The NER task can be formulated as a sequence labeling problem: given a sequence of words x1, ..., xn, we want to infer the NER tag li for each word xi, 1 ≤i ≤n. 
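As a concrete picture of this sequence-labeling formulation, the short sketch below (Python, with an invented example sentence; the IOB tagging format referred to later in the paper is assumed) shows what one labeled instance looks like: exactly one tag per input word.

```python
# Illustrative only: the sequence-labeling view of NER.
# Each word x_i in the input sentence receives one tag l_i (IOB format).
words = ["John", "Smith", "works", "for", "IBM", "in", "New", "York", "."]
tags  = ["B-PER", "I-PER", "O", "O", "B-ORG", "O", "B-GPE", "I-GPE", "O"]

# A labeled example is simply the pair (words, tags); a decoder must
# produce one tag for every input word.
assert len(words) == len(tags)
for word, tag in zip(words, tags):
    print(f"{word}\t{tag}")
```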
In this section we introduce three NER models that are used in the paper. 2.1 CRFs and MEMMs Conditional random fields (CRFs) are a class of discriminative probabilistic graphical models that provide powerful tools for labeling sequential data (Lafferty et al., 2001). CRFs learn a conditional probability model pλ(l|x) from a set of labeled training data, where x = (x1, ..., xn) is a random sequence of input words, l = (l1, ..., ln) is the sequence of label variables (NER tags) for x, and l has certain Markov properties conditioned on x. Specifically, a general-order CRF with order o assumes that label variable li is dependent on a fixed number o of previous label variables li−1, ..., li−o, with the following conditional distribution: pλ(l|x) = e Pn i=1 PK k=1 λkfk(li,li−1,...,li−o,x) Zλ(x) (1) where fk’s are feature functions, λk’s are weights of the feature functions (parameters to learn), and Zλ(x) is a normalization constant. When o = 1, we have a first-order CRF which is also known as a linear-chain CRF. Given a set of labeled training data D = (x(j), l(j))j=1,...,N, we seek to find an optimal set of parameters λ∗that maximize the conditional log-likelihood of the data: λ∗= arg max λ N X j=1 log pλ(l(j)|x(j)) (2) Once we obtain λ∗, we can use the trained model pλ∗(l|x) to decode the most likely label sequence l∗for any new input sequence of words x (via the Viterbi algorithm for example): l∗= arg max l pλ∗(l|x) (3) A related conditional probability model, called maximum entropy Markov model (MEMM) (McCallum et al., 2000), assumes that l is a Markov 1471 chain conditioned on x: pλ(l|x) = n Y i=1 pλ(li|li−1, ..., li−o, x) = n Y i=1 e PK k=1 λkfk(li,li−1,...,li−o,x) Zλ(li−1, ..., li−o, x) (4) The main difference between CRFs and MEMMs is that CRFs normalize the conditional distribution over the whole sequence as in (1), while MEMMs normalize the conditional distribution per token as in (4). As a result, CRFs can better handle the label bias problem (Lafferty et al., 2001). This benefit, however, comes at a price. The training time of order-o CRFs grows exponentially (O(Mo+1)) with the number of output labels M, which is typically slow even for moderate-size training data if M is large. In contrast, the training time of order-o MEMMs is linear (O(M)) with respect to M independent of o, so it can handle larger training data with higher order of dependency. We have implemented both a linear-chain CRF model and a general-order MEMM model. 2.2 Neural Networks With the increasing popularity of distributed (vector) representations of words, neural network models have recently been applied to tackle many NLP tasks including NER (Collobert et al., 2011; Lample et al., 2016). We have implemented a feedforward neural network model which maximizes the log-likelihood of the training data similar to that of (Collobert et al., 2011). We adopt a locally normalized model (the conditional distribution is normalized per token as in MEMMs) and introduce context dependency by conditioning on the previously assigned tags. We use a target word and its surrounding context as features. We do not use other common features such as gazetteers or character-level representations as such features might not be readily available or might not transfer to other languages. We have deployed two neural network architectures. The first one (called NN1) uses the word embedding of a word as the input. 
The second one (called NN2) adds a smoothing prototype layer that computes the cosine similarity between a word embedding and a fixed set of prototype vectors (learned during training) and returns a weighted average of these prototype vectors as the input. In our experiments we find that with the Figure 1: Architecture of the two neural network models: left-NN1, right-NN2. smoothing layer, NN2 tends to have a more balanced precision and recall than NN1. Both networks have one hidden layer, with sigmoid and softmax activation functions on the hidden and output layers respectively. The two neural network models are depicted in Figure 1. 3 Annotation Projection Approach The existing annotation projection approaches require parallel corpora or translations between a source language and a target language with alignment information. In this paper, we develop a heuristic, language-independent data selection scheme that seeks to select good-quality projection-labeled data from noisy comparable corpora. We use English as the source language. Suppose we have comparable1 sentence pairs (X, Y) between English and a target language, where X includes N English sentences x(1), ..., x(N), Y includes N target-language sentences y(1), ..., y(N), and y(j) is aligned to x(j) via an alignment model, 1 ≤j ≤N. We use a sentence pair (x, y) as an example to illustrate how the annotation projection procedure works, where x = (x1, x2, ..., xs) is an English sentence, and y = (y1, y2, ..., yt) is a target-language sentence that is aligned to x. Annotation Projection Procedure 1. Apply the English NER system on the English sentence x to generate the NER tags l = (l1, l2, ..., ls) for x. 2. Project the NER tags to the target-language sentence y using the alignment information. Specifically, if a sequence of English words (xi, ..., xi+p) is aligned to a sequence of target-language words (yj, ..., yj+q), and (xi, ..., xi+p) is recognized (by the English NER system) as an entity with NER tag l, 1Ideally, the sentences would be translations of each other, but we only require possibly parallel sentences. 1472 then (yj, ..., yj+q) is labeled with l2. Let l′ = (l′ 1, l′ 2, ..., l′ t) be the projected NER tags for the target-language sentence y. We can apply the annotation projection procedure on all the sentence pairs (X, Y), to generate projected NER tags L′ for the target-language sentences Y. (Y, L′) are automatically labeled NER data with no human annotation in the target language. One can use those projection-labeled data to train an NER system in the target language. The quality of such weakly labeled NER data, and consequently the accuracy of the target-language NER system, depend on both 1) the accuracy of the English NER system, and 2) the alignment accuracy of the sentence pairs. Since we don’t require actual translations, but only comparable data, the downside is that if some of the data are not actually parallel and if we use all for weakly supervised learning, the accuracy of the target-language NER system might be adversely affected. We are therefore motivated to design effective data selection schemes that can select good-quality projection-labeled data from noisy data, to improve the accuracy of the annotation projection approach for cross-lingual NER. 3.1 Data Selection Scheme We first design a metric to measure the annotation quality of a projection-labeled sentence in the target language. We construct a frequency table T which includes all the entities in the projectionlabeled target-language sentences. 
For each entity e, T also includes the projected NER tags for e and the relative frequency (empirical probability) ˆP(l|e) that entity e is labeled with tag l. Table 1 shows a snapshot of the frequency table where the target language is Portuguese. We use ˆP(l|e) to measure the reliability of labeling entity e with tag l in the target language. The intuition is that if an entity e is labeled by a tag l with higher frequency than other tags in the projection-labeled data, it is more likely that the annotation is correct. For example, if the joint accuracy of the source NER system and alignment system is greater than 0.5, then the correct tag of a random entity will have a higher relative frequency than other tags in a large enough sample. Based on the frequency scores, we calculate the quality score of a projection-labeled target2If the IOB (Inside, Outside, Beginning) tagging format is used, then (yj, yj+1, ..., yj+q) is labeled with (B-l, I-l,...,I-l). Entity Name NER Tag Frequency Estados Unidos GPE 0.853 Estados Unidos ORGANIZATION 0.143 Estados Unidos PEOPLE 0.001 Estados Unidos PRODUCT 0.001 Estados Unidos TITLEWORK 0.001 Estados Unidos EVENT 0.001 Table 1: A snapshot of the frequency table where the target language is Portuguese. Estados Unidos means United States. The correct NER tag for Estados Unidos is GPE which has the highest relative frequency in the weakly labeled data. language sentence y by averaging the frequency scores of the projected entities in the sentence: q(y) = Σe∈y ˆP(l′(e)|e) n(y) (5) where l′(e) is the projected NER tag for e, and n(y) is the total number of entities in sentence y. We use q(y) to measure the annotation quality of sentence y, and n(y) to measure the amount of annotation information contained in sentence y. We design a heuristic data selection scheme which selects projection-labeled sentences in the target language that satisfy the following condition: q(y) ≥q; n(y) ≥n (6) where q is a quality score threshold and n is an entity number threshold. We can tune the two parameters to make tradeoffs among the annotation quality of the selected sentences, the annotation information contained in the selected sentences, and the total number of sentence selected. One way to select the threshold parameters q and n is via a development set - either a small set of human-annotated data or a sample of the projection-labeled data. We select the threshold parameters via coordinate search using the development set: we first fix n = 3 and search the best ˆq in [0, 0.9] with a step size of 0.1; we then fix q = ˆq and select the best ˆn in [1, 5] with a step size of 1. 3.2 Accuracy Improvements We evaluate the effectiveness of the data selection scheme via experiments on 4 target languages: Japanese, Korean, German and Portuguese. We use comparable corpora between English and each target language (ranging from 2M to 6M tokens) with alignment information. For each target language, we also have a set of manually annotated NER data (ranging from 30K to 45K tokens) 1473 Language (q, n) Training Size F1 Score Japanese (0, 0) 4.9M 41.2 (0.7, 4) 1.3M 53.4 Korean (0, 0) 4.5M 25.0 (0.4, 2) 1.5M 38.7 German (0, 0) 5.2M 67.2 (0.4, 4) 2.6M 67.5 Portuguese (0, 0) 2.1M 61.5 (0.1, 4) 1.5M 62.7 Table 2: Performance comparison of weakly supervised NER systems trained without data selection ((q, n) = (0, 0)) and with data selection ((ˆq, ˆn) determined by coordinate search). which are served as the test data for evaluating the target-language NER system. 
The source (English) NER system is a linearchain CRF model which achieves an accuracy of 88.9 F1 score on an independent NER test set. The alignment systems between English and the target languages are maximum entropy models (Ittycheriah and Roukos, 2005), with an accuracy of 69.4/62.0/76.1/88.0 F1 score on independent Japanese/Korean/German/Portuguese alignment test sets. For each target language, we randomly select 5% of the projection-labeled data as the development set and the remaining 95% as the training set. We compare an NER system trained with all the projection-labeled training data with no data selection (i.e., (q, n) = (0, 0)) and an NER system trained with projection-labeled data selected by the data selection scheme where the development set is used to select the threshold parameters q and n via coordinate search. Both NER systems are 2nd-order MEMM models3 which use the same template of features. The results are shown in Table 2. For different target languages, we use the same source (English) NER system for annotation projection, so the differences in the accuracy improvements are mainly due to the alignment quality of the comparable corpora between English and different target languages. When the alignment quality is low (e.g., as for Japanese and Korean) and hence the projection-labeled NER data are quite noisy, the proposed data selection scheme is very effective in selecting good-quality projection-labeled data and the improvement is big: +12.2 F1 score for 3In our experiments, CRFs cannot handle training data with a few million words, since our NER system has over 50 entity types, and the training time of CRFs grows at least quadratically in the number of entity types. Japanese and +13.7 F1 score for Korean. Using a stratified shuffling test (Noreen, 1989), for a significance level of 0.05, data-selection is statistically significantly better than no-selection for Japanese, Korean and Portuguese. 4 Representation Projection Approach In this paper, we propose a new approach for direct NER model transfer based on representation projection. Under this approach, we train a single English NER system that uses only word embeddings as input representations. We create mapping functions which can map words in any language into English and we simply use the English NER system to decode. In particular, by mapping all languages into English, we are using one universal NER system and we do not need to re-train the system when a new language is added. 4.1 Monolingual Word Embeddings We first build vector representations of words (word embeddings) for a language using monolingual data. We use a variant of the Continuous Bag-of-Words (CBOW) word2vec model (Mikolov et al., 2013a), which concatenates the context words surrounding a target word instead of adding them (similarly to (Ling et al., 2015)). Additionally, we employ weights w = 1 dist(x,xc) that decay with the distance of a context word xc to a target word x. Tests on word similarity benchmarks show this variant leads to small improvements over the standard CBOW model. We train 300-dimensional word embeddings for English. Following (Mikolov et al., 2013b), we use larger dimensional embeddings for the target languages, namely 800. We train word2vec for 1 epoch for English/Spanish and 5 epochs for the rest of the languages for which we have less data. 4.2 Cross-Lingual Representation Projection We learn cross-lingual word embedding mappings, similarly to (Mikolov et al., 2013b). 
For a target language f, we first extract a small training dictionary from a phrase table that includes word-to-word alignments between English and the target language f. The dictionary contains English and target-language word pairs with weights: (xi, yi, wi)i=1,...,n, where xi is an English word, yi is a target-language word, and the weight wi = ˆP(xi|yi) is the relative frequency of xi given yi as extracted from the phrase table. 1474 Suppose we have monolingual word embeddings for English and the target language f. Let ui ∈Rd1 be the vector representation for English word xi, vi ∈Rd2 be the vector representation for target-language word yi. We find a linear mapping Mf→e by solving the following weighted least squares problem where the dictionary is used as the training data: Mf→e = arg min M n X i=1 wi||ui −Mvi||2 (7) In (7) we generalize the formulation in (Mikolov et al., 2013b) by adding frequency weights to the word pairs, so that more frequent pairs are of higher importance. Using Mf→e, for any new word in f with vector representation v, we can project it into the English vector space as the vector Mf→ev. The training dictionary plays a key role in finding an effective cross-lingual embedding mapping. To control the size of the dictionary, we only include word pairs with a minimum frequency threshold. We set the threshold to obtain approximately 5K to 6K unique word pairs for a target language, as our experiments show that larger-size dictionaries might harm the performance of representation projection for direct NER model transfer. 4.3 Direct NER Model Transfer The source (English) NER system is a neural network model (with architecture NN1 or NN2) that uses only word embedding features (embeddings of a word and its surrounding context) in the English vector space. Model transfer is achieved simply by projecting the target language word embeddings into the English vector space and decoding these using the English NER system. More specifically, given the word embeddings of a sequence of words in a target language f, (v1, ..., vt), we project them into the English vector space by applying the linear mapping Mf→e: (Mf→ev1, ..., Mf→evt). The English NER system is then applied on the projected input to produce NER tags. Words not in the target-language vocabulary are projected into their English embeddings if they are found in the English vocabulary, or into an NER-trained UNK vector otherwise. 5 Co-Decoding Given two weakly supervised NER systems which are trained with different data using different models (MEMM model for annotation projection and neural network model for representation projection), we would like to design a co-decoding scheme that can combine the outputs (views) of the two systems to produce an output that is more accurate than the outputs of individual systems. Since both systems are statistical models and can produce confidence scores (probabilities), a natural co-decoding scheme is to compare the confidence scores of the NER tags generated by the two systems and select the tags with higher confidences scores. However, confidence scores of two weakly supervised systems may not be directly comparable, especially when comparing O tags with non-O tags (i.e., entity tags). We consider an exclude-O confidence-based co-decoding scheme which we find to be more effective empirically. 
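Before the two co-decoding schemes are described in detail, here is a minimal sketch of the representation projection step of Sections 4.2-4.3: the weighted least-squares problem of Eq. (7) has a closed-form solution, and target-language vectors are then mapped into the English space before decoding with the English model. This is an illustrative reconstruction assuming numpy; the variable names are ours, and only the dimensions (300-d English, 800-d target, a dictionary of a few thousand pairs) follow the text.

```python
import numpy as np

def fit_projection(U, V, w):
    """Weighted least-squares fit of M minimizing sum_i w_i * ||u_i - M v_i||^2.

    U: (n, d1) English embeddings u_i, V: (n, d2) target-language embeddings v_i,
    w: (n,) dictionary weights w_i = P(x_i | y_i) from the phrase table.
    Setting the gradient to zero gives M (V^T diag(w) V) = U^T diag(w) V,
    a standard weighted normal-equation system (a small ridge term could be
    added if the d2 x d2 matrix is ill-conditioned).
    """
    Vw = V * w[:, None]               # diag(w) V
    A = V.T @ Vw                      # (d2, d2), symmetric
    B = U.T @ Vw                      # (d1, d2)
    M = np.linalg.solve(A, B.T).T     # solve M A = B  =>  M has shape (d1, d2)
    return M

def project(M, V_target):
    """Map target-language word vectors into the English embedding space (M v)."""
    return V_target @ M.T

# Illustrative shapes only: 300-d English space, 800-d target space, ~5K dictionary pairs.
rng = np.random.default_rng(0)
U, V, w = rng.normal(size=(5000, 300)), rng.normal(size=(5000, 800)), rng.random(5000)
M = fit_projection(U, V, w)
english_space_vecs = project(M, rng.normal(size=(10, 800)))  # fed to the English NER model
print(english_space_vecs.shape)  # (10, 300)
```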
It is similar to the pure confidence-based scheme, with the only difference that it always prefers a non-O tag of one system to an O tag of the other system, regardless of their confidence scores. In our experiments we find that the annotation projection system tends to have a high precision and low recall, i.e., it detects fewer entities, but for the detected entities the accuracy is high. The representation projection system tends to have a more balanced precision and recall. Based on this observation, we develop the following rank-based co-decoding scheme that gives higher priority to the high-precision annotation projection system: 1. The combined output includes all the entities detected by the annotation projection system. 2. It then adds all the entities detected by the representation projection system that do not conflict4 with entities detected by the annotation projection system (to improve recall). Note that an entity X detected by the representation projection system does not conflict with the annotation projection system if the annotation projection system produces O tags for the entire span of X. For example, suppose the output tag sequence of annotation projection is (B-PER,O,O,O,O), of representation projection is (B-ORG,I-ORG,O,B-LOC,I-LOC), then the combined output under the rank-based scheme will be (B-PER,O,O,B-LOC,I-LOC). 4Two entities detected by two different systems conflict with each other if either 1) the two entities have different spans but overlap with each other; or 2) the two entities have the same span but with different NER tags. 1475 Japanese P R F1 Annotation-Projection (AP) 69.9 43.2 53.4 Representation-Projection (NN1) 71.5 36.6 48.4 Representation-Projection (NN2) 59.9 42.4 49.7 Co-Decoding (Conf): AP+NN1 65.7 49.5 56.5 Co-Decoding (Rank): AP+NN1 68.3 51.6 58.8 Co-Decoding (Conf): AP+NN2 59.5 53.3 56.2 Co-Decoding (Rank): AP+NN2 61.6 54.5 57.8 Supervised (272K) 84.5 80.9 82.7 Korean P R F1 Annotation-Projection (AP) 69.5 26.8 38.7 Representation-Projection (NN1) 66.1 23.2 34.4 Representation-Projection (NN2) 68.5 43.4 53.1 Co-Decoding (Conf): AP+NN1 68.2 41.0 51.2 Co-Decoding (Rank): AP+NN1 71.3 42.8 53.5 Co-Decoding (Conf): AP+NN2 68.9 53.4 60.2 Co-Decoding (Rank): AP+NN2 70.0 53.3 60.5 Supervised (97K) 88.2 74.0 80.4 German P R F1 Annotation-Projection (AP) 76.5 60.5 67.5 Representation-Projection (NN1) 69.0 48.8 57.2 Representation-Projection (NN2) 63.7 66.1 64.9 Co-Decoding (Conf): AP+NN1 68.5 61.7 64.9 Co-Decoding (Rank): AP+NN1 72.7 65.0 68.6 Co-Decoding (Conf): AP+NN2 64.7 71.3 67.9 Co-Decoding (Rank): AP+NN2 67.1 72.6 69.7 Supervised (125K) 77.8 68.1 72.6 Portuguese P R F1 Annotation-Projection (AP) 84.0 50.1 62.7 Representation-Projection (NN1) 70.5 47.6 56.8 Representation-Projection (NN2) 66.0 63.4 64.7 Co-Decoding (Conf): AP+NN1 72.0 55.8 62.9 Co-Decoding (Rank): AP+NN1 77.5 59.7 67.4 Co-Decoding (Conf): AP+NN2 68.1 67.1 67.6 Co-Decoding (Rank): AP+NN2 70.9 68.3 69.6 Supervised (173K) 79.8 71.9 75.6 Table 3: In-house NER data: Precision, Recall and F1 score on exact phrasal matches. The highest F1 score among all the weakly supervised approaches is shown in bold. Same for Tables 4 and 5. 
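To make the rank-based combination concrete, the following minimal sketch reproduces the worked example above. It is illustrative only: entity outputs are assumed to be given as (start, end, type) token spans, and the helper names are ours, not the authors' code.

```python
def conflicts(a, b):
    """Two (start, end, type) spans conflict if their token ranges overlap at all.
    (An identical entity found by both systems is already covered by the AP copy,
    so treating any overlap as a conflict is sufficient here.)"""
    (s1, e1, _), (s2, e2, _) = a, b
    return s1 < e2 and s2 < e1

def rank_based_codecode(ap_entities, rp_entities):
    """Rank-based co-decoding: keep every entity from the (high-precision)
    annotation projection system, then add representation projection entities
    that do not conflict with any kept entity (to improve recall)."""
    combined = list(ap_entities)
    for ent in rp_entities:
        if all(not conflicts(ent, kept) for kept in combined):
            combined.append(ent)
    return combined

# The worked example from the text, written as spans over 5 tokens:
# AP output (B-PER,O,O,O,O)             -> [(0, 1, "PER")]
# RP output (B-ORG,I-ORG,O,B-LOC,I-LOC) -> [(0, 2, "ORG"), (3, 5, "LOC")]
ap = [(0, 1, "PER")]
rp = [(0, 2, "ORG"), (3, 5, "LOC")]
print(rank_based_codecode(ap, rp))
# [(0, 1, 'PER'), (3, 5, 'LOC')], i.e. (B-PER,O,O,B-LOC,I-LOC) as in the text
```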
6 Experiments In this section, we evaluate the performance of the proposed approaches for cross-lingual NER, including the 2 projection-based approaches and the 2 co-decoding schemes for combining them: (1) The annotation projection (AP) approach with heuristic data selection; (2) The representation projection approach (with two neural network architectures NN1 and NN2); (3) The exclude-O confidence-based co-decoding scheme; (4) The rank-based co-decoding scheme. 6.1 NER Data Sets We have used various NER data sets for evaluation. The first group includes in-house humanannotated newswire NER data for four languages: Japanese, Korean, German and Portuguese, annotated with over 50 entity types. The main motivation of deploying such a fine-grained entity type set is to build cognitive question answering applications on top of the NER systems. The entity type set has been engineered to cover many of the frequent entity types that are targeted by naturallyphrased questions. The sizes of the test data sets are ranging from 30K to 45K tokens. The second group includes open humanannotated newswire NER data for Spanish, Dutch and German from the CoNLL NER data sets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003). The CoNLL data have 4 entity types: PER (persons), ORG (organizations), LOC (locations) and MISC (miscellaneous entities). The sizes of the development/test data sets are ranging from 35K to 70K tokens. The development data are used for tuning the parameters of learning methods. 6.2 Evaluation for In-House NER Data In Table 3, we show the results of different approaches for the in-house NER data. For annotation projection, the source (English) NER system is a linear-chain CRF model trained with 328K tokens of human-annotated English newswire data. The target-language NER systems are 2nd-order MEMM models trained with 1.3M, 1.5M, 2.6M and 1.5M tokens of projection-labeled data for Japanese, Korean, German and Portuguese, respectively. The projection-labeled data are selected using the heuristic data selection scheme (see Table 2). For representation projection, the source (English) NER systems are neural network models with architectures NN1 and NN2 (see Figure 1), both trained with 328K tokens of humanannotated English newswire data. The results show that the annotation projection (AP) approach has a relatively high precision and low recall. For representation projection, neural network model NN2 (with a smoothing layer) is better than NN1, and NN2 tends to have a more balanced precision and recall. The rank-based codecoding scheme is more effective for combining the two projection-based approaches. 
In particular, the rank-based scheme that combines AP and NN2 achieves the highest F1 score among all the weakly supervised approaches for Korean, German and Portuguese (second highest F1 score for Japanese), and it improves over the best of the two 1476 Spanish P R F1 Annotation-Projection (AP) 65.5 59.1 62.1 Representation-Projection (NN1) 63.9 52.2 57.4 Representation-Projection (NN2) 55.3 51.8 53.5 Co-Decoding (Conf): AP+NN1 64.3 66.8 65.5 Co-Decoding (Rank): AP+NN1 63.7 65.3 64.5 Co-Decoding (Conf): AP+NN2 58.0 63.9 60.8 Co-Decoding (Rank): AP+NN2 60.8 64.5 62.6 Supervised (264K) 81.3 79.8 80.6 Dutch P R F1 Annotation-Projection (AP) 73.3 63.0 67.8 Representation-Projection (NN1) 82.6 47.4 60.3 Representation-Projection (NN2) 66.3 43.5 52.5 Co-Decoding (Conf): AP+NN1 72.3 66.5 69.3 Co-Decoding (Rank): AP+NN1 72.8 65.3 68.8 Co-Decoding (Conf): AP+NN2 65.3 64.7 65.0 Co-Decoding (Rank): AP+NN2 69.7 66.0 67.8 Supervised (199K) 82.9 81.7 82.3 German P R F1 Annotation-Projection (AP) 71.8 54.7 62.1 Representation-Projection (NN1) 79.4 41.4 54.4 Representation-Projection (NN2) 64.6 42.7 51.4 Co-Decoding (Conf): AP+NN1 70.1 59.5 64.4 Co-Decoding (Rank): AP+NN1 71.0 59.4 64.7 Co-Decoding (Conf): AP+NN2 64.2 59.9 62.0 Co-Decoding (Rank): AP+NN2 66.8 60.6 63.6 Supervised (206K) 81.2 64.3 71.8 Table 4: CoNLL NER development data. projection-based systems by 2.2 to 7.4 F1 score. We also provide the performance of supervised learning where the NER system is trained with human-annotated data in the target language (with size shown in the bracket). While the performance of the weakly supervised systems is not as good as supervised learning, it is important to build weakly supervised systems with decent performance when supervised annotation is unavailable. Even if supervised annotation is feasible, the weakly supervised systems can be used to pre-annotate the data, and we observed that pre-annotation can improve the annotation speed by 40%-60%, which greatly reduces the annotation cost. 6.3 Evaluation for CoNLL NER Data For the CoNLL data, the source (English) NER system for annotation projection is a linearchain CRF model trained with the CoNLL English training data (203K tokens), and the targetlanguage NER systems are 2nd-order MEMM models trained with 1.3M, 7.0M and 1.2M tokens of projection-labeled data for Spanish, Dutch and German, respectively. The projection-labeled data are selected using the heuristic data selection scheme, where the threshold parameters q and n are determined via coordinate search based on the CoNLL development sets. Compared with no data selection, the data selection scheme improves the annotation projection approach by 2.7/2.0/2.7 F1 score on the Spanish/Dutch/German development data. In addition to standard NER features such as n-gram word features, word type features, prefix and suffix features, the target-language NER systems also use the multilingual Wikipedia entity type mappings developed in (Ni and Florian, 2016) to generate dictionary features and as decoding constraints, which improve the annotation projection approach by 3.0/5.4/7.9 F1 score on the Spanish/Dutch/German development data. For representation projection, the source (English) NER systems are neural network models (NN1 and NN2) trained with the CoNLL English training data. 
Compared with the standard CBOW word2vec model, the concatenated variant improves the representation projection approach (NN1) by 8.9/11.4/6.8 F1 score on the Spanish/Dutch/German development data, as well as by 2.0 F1 score on English. In addition, the frequency-weighted cross-lingual word embedding projection (7) improves the representation projection approach (NN1) by 2.2/6.3/3.7 F1 score on the Spanish/Dutch/German development data, compared with using uniform weights on the same data. We do observe, however, that using uniform weights when keeping only the most frequent translation of a word instead of all word pairs above a threshold in the training dictionary, leads to performance similar to that of the frequencyweighted projection. In Table 4 we show the results for the CoNLL development data. For representation projection, NN1 is better than NN2. Both the annotation projection approach and NN1 tend to have a high precision. In this case, the exclude-O confidencebased co-decoding scheme that combines AP and NN1 achieves the highest F1 score for Spanish and Dutch (second highest F1 score for German), and improves over the best of the two projection-based systems by 1.5 to 3.4 F1 score. In Table 5 we compare our top systems (confidence or rank-based co-decoding of AP and NN1, determined by the development data) with the best results of the cross-lingual NER approaches proposed in T¨ackstr¨om et al. (2012), Nothman et al. (2013) and Tsai et al. (2016) on the CoNLL test data. Our systems outperform the previous stateof-the-art approaches, closing more of the gap to 1477 Spanish P R F1 T¨ackstr¨om et al. (2012) x x 59.3 Nothman et al. (2013) x x 61.0 Tsai et al. (2016) x x 60.6 Co-Decoding (Conf): AP+NN1 64.9 65.2 65.1 Co-Decoding (Rank): AP+NN1 64.6 63.9 64.3 Supervised (264K) 82.5 82.3 82.4 Dutch P R F1 T¨ackstr¨om et al. (2012) x x 58.4 Nothman et al. (2013) x x 64.0 Tsai et al. (2016) x x 61.6 Co-Decoding (Conf): AP+NN1 69.1 62.0 65.4 Co-Decoding (Rank): AP+NN1 69.3 61.0 64.8 Supervised (199K) 85.1 83.9 84.5 German P R F1 T¨ackstr¨om et al. (2012) x x 40.4 Nothman et al. (2013) x x 55.8 Tsai et al. (2016) x x 48.1 Co-Decoding (Conf): AP+NN1 68.5 51.0 58.5 Co-Decoding (Rank): AP+NN1 68.3 50.4 58.0 Supervised (206K) 79.6 65.3 71.8 Table 5: CoNLL NER test data. supervised learning. 7 Related Work The traditional annotation projection approaches (Yarowsky et al., 2001; Zitouni and Florian, 2008; Ehrmann et al., 2011) project NER tags across language pairs using parallel corpora or translations. Wang and Manning (2014) proposed a variant of annotation projection which projects expectations of tags and uses them as constraints to train a model based on generalized expectation criteria. Annotation projection has also been applied to several other cross-lingual NLP tasks, including word sense disambiguation (Diab and Resnik, 2002), part-of-speech (POS) tagging (Yarowsky et al., 2001) and dependency parsing (Rasooli and Collins, 2015). Wikipedia has been exploited to generate weakly labeled multilingual NER training data. The basic idea is to first categorize Wikipedia pages into entity types, either based on manually constructed rules that utilize the category information of Wikipedia (Richman and Schone, 2008) or Freebase attributes (Al-Rfou et al., 2015), or via a classifier trained with manually labeled Wikipedia pages (Nothman et al., 2013). Heuristic rules are then developed in these works to automatically label the Wikipedia text with NER tags. 
Ni and Florian (2016) built high-accuracy, high-coverage multilingual Wikipedia entity type mappings using weakly labeled data and applied those mappings as decoding constrains or dictionary features to improve multilingual NER systems. For direct NER model transfer, T¨ackstr¨om et al. (2012) built cross-lingual word clusters using monolingual data in source/target languages and aligned parallel data between source and target languages. The cross-lingual word clusters were then used to generate universal features. Tsai et al. (2016) applied the cross-lingual wikifier developed in (Tsai and Roth, 2016) and multilingual Wikipedia dump to generate languageindependent labels (FreeBase types and Wikipedia categories) for n-grams in a document, and those labels were used as universal features. Different ways of obtaining cross-lingual embeddings have been proposed in the literature. One approach builds monolingual representations separately and then brings them to the same space typically using a seed dictionary (Mikolov et al., 2013b; Faruqui and Dyer, 2014). Another line of work builds inter-lingual representations simultaneously, often by generating mixed language corpora using the supervision at hand (aligned sentences, documents, etc.) (Vuli´c and Moens, 2015; Gouws et al., 2015). We opt for the first solution in this paper because of its flexibility: we can map all languages to English rather than requiring separate embeddings for each language pair. Additionally we are able to easily add a new language without any constraints on the type of data needed. Note that although we do not specifically create interlingual representations, by training mappings to the common language, English, we are able to map words in different languages to a common space. Similar approaches for cross-lingual model transfer have been applied to other NLP tasks such as document classification (Klementiev et al., 2012), dependency parsing (Guo et al., 2015) and POS tagging (Gouws and Søgaard, 2015). 8 Conclusion In this paper, we developed two weakly supervised approaches for cross-lingual NER based on effective annotation and representation projection. We also designed two co-decoding schemes that combine the two projection-based systems in an intelligent way. Experimental results show that the combined systems outperform three state-ofthe-art cross-lingual NER approaches, providing a strong baseline for building cross-lingual NER systems with no human annotation in target languages. 1478 References Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, and Steven Skiena. 2015. Polyglot-ner: Massive multilingual named entity recognition. In Proceedings of the 2015 SIAM International Conference on Data Mining. SIAM, Vancouver, British Columbia, Canada. https://doi.org/10.1137/1.9781611974010.66. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. http://dl.acm.org/citation.cfm?id=1953048.2078186. Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, ACL’02, pages 255–262. https://doi.org/10.3115/1073083.1073126. Maud Ehrmann, Marco Turchi, and Ralf Steinberger. 2011. Building a multilingual named entity-annotated corpus using annotation projection. 
In Proceedings of Recent Advances in Natural Language Processing. Association for Computational Linguistics, pages 118–124. http://aclweb.org/anthology/R11-1017. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Gothenburg, Sweden, pages 462– 471. http://www.aclweb.org/anthology/E14-1049. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. In Proceedings of the 32nd International Conference on Machine Learning. JMLR Workshop and Conference Proceedings, pages 748–756. http://jmlr.org/proceedings/papers/v37/gouws15.pdf. Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1386–1390. http://www.aclweb.org/anthology/N15-1157. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, Beijing, China, pages 1234– 1244. http://www.aclweb.org/anthology/P15-1119. Abraham Ittycheriah and Salim Roukos. 2005. A maximum entropy word aligner for arabic-english machine translation. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing. pages 89–96. http://aclweb.org/anthology/H05-1012. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012. The COLING 2012 Organizing Committee, Mumbai, India, pages 1459–1474. http://www.aclweb.org/anthology/C12-1089. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, ICML’01, pages 282–289. http://dl.acm.org/citation.cfm?id=645530.655813. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 260–270. https://doi.org/10.18653/v1/N16-1030. Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1299–1304. http://www.aclweb.org/anthology/N151142. Andrew McCallum, Dayne Freitag, and Fernando C. N. Pereira. 2000. Maximum entropy markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning. 
Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, ICML’00, pages 591–598. http://dl.acm.org/citation.cfm?id=645529.658277. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. CoRR abs/1309.4168. http://arxiv.org/abs/1309.4168. 1479 David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Linguisticae Investigationes 30(1):3–26. Publisher: John Benjamins Publishing Company. https://doi.org/10.1075/li.30.1.03nad. Jian Ni and Radu Florian. 2016. Improving multilingual named entity recognition with wikipedia entity type mapping. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1275–1284. https://doi.org/10.18653/v1/D16-1135. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. John Wiley & Sons, Inc., New York, NY, USA. Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. 2013. Learning multilingual named entity recognition from wikipedia. Journal of Artificial Intelligence 194:151–175. https://doi.org/10.1016/j.artint.2012.03.006. Sadegh Mohammad Rasooli and Michael Collins. 2015. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 328–338. https://doi.org/10.18653/v1/D15-1039. E. Alexander Richman and Patrick Schone. 2008. Mining wiki resources for multilingual named entity recognition. In Proceedings of ACL-08: HLT. Association for Computational Linguistics, pages 1–9. http://aclweb.org/anthology/P08-1001. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 477–487. http://aclweb.org/anthology/N12-1052. Erik F. Tjong Kim Sang. 2002. Introduction to the conll-2002 shared task: Language-independent named entity recognition. In Proceedings of the Sixth Conference on Natural Language Learning Volume 20. Association for Computational Linguistics, Stroudsburg, PA, USA, CONLL’02, pages 1–4. https://doi.org/10.3115/1118853.1118877. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4. Association for Computational Linguistics, Stroudsburg, PA, USA, CONLL’03, pages 142–147. https://doi.org/10.3115/1119176.1119195. Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via wikification. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 219–228. https://doi.org/10.18653/v1/K16-1022. Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wikification using multilingual embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics, pages 589– 598. https://doi.org/10.18653/v1/N16-1072. Ivan Vuli´c and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, Beijing, China, pages 719–725. http://www.aclweb.org/anthology/P15-2118. Mengqiu Wang and D. Christopher Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. Transactions of the Association of Computational Linguistics 2:55–66. http://aclweb.org/anthology/Q14-1005. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT’01, pages 1–8. https://doi.org/10.3115/1072133.1072187. Imed Zitouni and Radu Florian. 2008. Mention detection crossing the language barrier. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 600–609. http://aclweb.org/anthology/D08-1063. 1480
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1481–1491, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1136
Context Sensitive Lemmatization Using Two Successive Bidirectional Gated Recurrent Networks
Abhisek Chakrabarty, Onkar Arun Pandit and Utpal Garain
Computer Vision and Pattern Recognition Unit, Indian Statistical Institute
203 B.T. Road, Kolkata-700108, India
[email protected], [email protected], [email protected]
Abstract
We introduce a composite deep neural network architecture for supervised, language-independent, context sensitive lemmatization. The proposed method casts the task as identifying the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures: the first extracts the character-level dependencies of the word and the second captures its contextual information. The key advantages of our model compared to state-of-the-art lemmatizers such as Lemming and Morfette are that (i) it is independent of human-decided features, and (ii) except for the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages: Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. Except for Bengali, the proposed method outperforms Lemming and Morfette on all of these languages. To train the model on Bengali, we develop a gold lemma annotated dataset1 (1,702 sentences with a total of 20,257 word tokens), which is an additional contribution of this work.
1 Introduction
Lemmatization is the process of determining the root/dictionary form of a surface word. Morphologically rich languages suffer from the existence of various inflectional and derivational variations of a root, depending on several linguistic properties such as honorificity, parts of speech (POS), person, tense, etc. Lemmas map the related word forms to lexical resources, thus identifying them as members of the same group and providing their semantic and syntactic information. Stemming is similar to lemmatization in that it produces the common portion of the variants, but it has several limitations: (i) a stem is not guaranteed to be a legitimate word form, and (ii) words are considered in isolation. Hence, for context sensitive languages, i.e. where the same inflected word form may come from different sources and can only be disambiguated by considering its neighbouring information, lemmatization is the foremost task for handling diverse text processing problems (e.g. sense disambiguation, parsing, translation).
The key contributions of this work are as follows. We address context sensitive lemmatization by introducing a two-stage bidirectional gated recurrent neural network (BGRNN) architecture. Our model is a supervised one that needs lemma tagged continuous text to learn.
1 The dataset and the code of the model architecture are released with the paper. They are also available at http://www.isical.ac.in/~utpal/resources.php
Its two most important advantages compared to the state-ofthe-art supervised models (Chrupala et al., 2008; Toutanova and Cherry, 2009; Gesmundo and Samardzic, 2012; M¨uller et al., 2015) are - (i) we do not need to define hand-crafted features such as the word form, presence of special characters, character alignments, surrounding words etc. (ii) parts of speech and other morphological attributes of the surface words are not required for joint learning. Additionally, unknown word forms are also taken care of as the transformation between word-lemma pair is learnt, not the lemma itself. We exploit two steps learning in our method. At first, characters in the words are passed sequentially through a BGRNN to get a syntactic embedding of each word and then the outputs are 1481 combined with the corresponding semantic embeddings. Finally, mapping between the combined embeddings to word-lemma transformations are learnt using another BGRNN. For the present work, we assess our model on nine languages having diverse morphological variations. Out of them, two (Bengali and Hindi) belong to the Indic languages family and the rests (Catalan, Dutch, Hungarian, Italian, Latin, Romanian and Spanish) are taken from the European languages. To evaluate the proposed model on Bengali, a lemma annotated continuous text has been developed. As so far there is no such standard large dataset for supervised lemmatization in Bengali, the prepared one would surely contribute to the respective NLP research community. For the remaining languages, standard datasets are used for experimentation. Experimental results reveal that our method outperforms Lemming (M¨uller et al., 2015) and Morfette (Chrupala et al., 2008) on all the languages except Bengali. 1.1 Related Works Efforts on developing lemmatizers can be divided into two principle categories (i) rule/heuristics based approaches (Koskenniemi, 1984; Plisson et al., 2004) which are usually not portable to different languages and (ii) learning based methods (Chrupala et al., 2008; Toutanova and Cherry, 2009; Gesmundo and Samardzic, 2012; M¨uller et al., 2015; Nicolai and Kondrak, 2016) requiring prior training dataset to learn the morphological patterns. Again, the later methods can be further classified depending on whether context of the current word is considered or not. Lemmatization without context (Cotterell et al., 2016; Nicolai and Kondrak, 2016) is closer to stemming and not the focus of the present work. It is noteworthy here that the supervised lemmatization methods do not try to classify the lemma of a given word form as it is infeasible due to having a large number of lemmas in a language. Rather, learning the transformation between word-lemma pair is more generalized and it can handle the unknown word forms too. Several representations of wordlemma transformation have been introduced so far such as shortest edit script (SES), label set, edit tree by Chrupala et al. (2008), Gesmundo and Samardzic (2012) and M¨uller et al. (2015) respectively. Following M¨uller et al. (2015), we consider lemmatization as the edit tree classification problem. Toutanova and Cherry (2009); M¨uller et al. (2015) also showed that joint learning of lemmas with other morphological attributes is mutually beneficial but obtaining the gold annotated datasets is very expensive. In contrast, our model needs only lemma annotated continuous text (not POS and other tags) to learn the word morphology. 
Since our experiments include the Indic languages also, it would not be an overstatement to say that there have been little efforts on lemmatization so far (Faridee et al., 2009; Loponen and J¨arvelin, 2010; Paul et al., 2013; Bhattacharyya et al., 2014). The works by Faridee et al. (2009); Paul et al. (2013) are language specific rule based for Bengali and Hindi respectively. (Loponen and J¨arvelin, 2010)’s primary objective was to improve the retrieval performance. Bhattacharyya et al. (2014) proposed a heuristics based lemmatizer using WordNet but they did not consider context of the target word which is an important basis to lemmatize Indic languages. Chakrabarty and Garain (2016) developed an unsupervised language independent lemmatizer and evaluated it on Bengali. They consider the contextual information but the major disadvantage of their method is dependency on dictionary as well as POS information. Very recently, a supervised neural lemmatization model has been introduced by Chakrabarty et al. (2016). They treat the problem as lemma transduction rather than classification. The particular root in the dictionary is chosen as the lemma with which the transduced vector possesses maximum cosine similarity. Hence, their approach fails when the correct lemma of a word is not present in the dictionary. Besides, the lemmatization accuracy obtained by the respective method is not very significant. Apart from the mentioned works, there is no such commendable effort so far. Rest of this paper is organized as follows. In section 2, we describe the proposed lemmatization method. Experimental setup and the results are presented in section 3. Finally, in section 4 we conclude the paper. 2 The Proposed Method As stated earlier in section 1.1, we represent the mapping between a word to its lemma using edit tree (Chrupała, 2008; M¨uller et al., 2015). An edit tree embeds all the necessary edit operations within it i.e. insertions, deletions and substitutions of strings required throughout the transformation 1482 Figure 1: Edit trees for the word-lemma pairs ‘sang-sing’ and ‘achieving-achieve’. process. Figure 1 depicts two edit trees that map the inflected English words ‘sang’ and ‘achieving’ to their respective lemmas ‘sing’ and ‘achieve’. For generalization, edit trees encode only the substitutions and the length of prefixes and suffixes of the longest common substrings. Initially, all unique edit trees are extracted from the associated surface word-lemma pairs present in the training set. The extracted trees refer to the class labels in our model. So, for a test word, the goal is to classify the correct edit tree which, applied on the word, returns the lemma. Next, we will describe the architecture of the proposed neural lemmatization model. It is evident that for morphologically rich languages, both syntactic and semantic knowledge help in lemmatizing a surface word. Now a days, it is a common practice to embed the functional properties of words into vector representations. Despite the word vectors prove very effectual in semantic processing tasks, they are modelled using the distributional similarity obtained from a raw corpus. Morphological regularities, local and non-local dependencies in character sequences that play deciding roles to find the lemmas, are not taken into account where each word has its own vector interpretation. We address this issue by incorporating two different embeddings into our model. 
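As a rough illustration of the transformation rules just described, the following Python sketch extracts a word-to-lemma rule around the longest common substring, checks whether it is applicable to another word, and applies it. This is a flattened, non-recursive approximation of the edit trees of Chrupała (2008) and Müller et al. (2015), not the exact recursive encoding used in this work, and all function names are illustrative.

```python
def longest_common_substring(a, b):
    """Longest common substring of a and b, plus its start offset in each string."""
    best_len, end_a, end_b = 0, 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, end_a, end_b = cur[j], i, j
        prev = cur
    return a[end_a - best_len:end_a], end_a - best_len, end_b - best_len

def extract_rule(word, lemma):
    """Flat (prefix-substitution, suffix-substitution) rule around the shared substring."""
    common, wi, li = longest_common_substring(word, lemma)
    return (word[:wi], lemma[:li],                            # substitute this prefix ...
            word[wi + len(common):], lemma[li + len(common):])  # ... and this suffix

def applicable(rule, word):
    """A rule applies only if the word carries the prefix and suffix the rule rewrites."""
    w_pre, _, w_suf, _ = rule
    return (len(word) >= len(w_pre) + len(w_suf)
            and word.startswith(w_pre) and word.endswith(w_suf))

def apply_rule(rule, word):
    w_pre, l_pre, w_suf, l_suf = rule
    return l_pre + word[len(w_pre):len(word) - len(w_suf)] + l_suf

sang = extract_rule("sang", "sing")          # ('sa', 'si', '', '')
ing = extract_rule("achieving", "achieve")   # ('', '', 'ing', 'e')
print(applicable(sang, "achieving"))         # False: the 'sang-sing' rule does not apply
print(apply_rule(ing, "believing"))          # 'believe'
```

The applicability test mirrors the observation above that the tree induced from 'sang-sing' cannot be applied to 'achieving'.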
Semantic embedding is achieved using word2vec (Mikolov et al., 2013a,b), which has been empirically found highly successful. To devise the syntactic embedding of a word, we follow the work of Ling et al. (2015) that uses compositional character to word model using bidirectional long-short term memory (BLSTM) network. In our experiments, different Figure 2: Syntactic vector composition for a word. gated recurrent cells such as LSTM (Graves, 2013) and GRU (Cho et al., 2014), are explored. The next subsection describes the module to construct the syntactic vectors by feeding the character sequences into BGRNN architecture. 2.1 Forming Syntactic Embeddings Our goal is to build syntactic embeddings of words that capture the similarities in morphological level. Given an input word w, the target is to obtain a d dimensional vector representing the syntactic structure of w. The procedure is illustrated in Figure 2. At first, an alphabet of characters is defined as C. We represent w as a sequence of characters c1, . . . , cm where m is the word length and each character ci is defined as a one hot encoded vector 1ci, having one at the index of ci in the alphabet C. An embedding layer is defined as Ec ∈Rdc×|C|, that projects each one hot encoded character vector to a dc dimensional embedded vector. For a character ci, its projected vector eci is obtained from the embedding layer Ec, using this relation eci = Ec · 1ci where ‘·’ is the matrix multiplication operation. Given a sequence of vectors x1, . . . , xm as input, a LSTM cell computes the state sequence h1, . . . , hm using the following equations: ft = σ(Wfxt + Ufht−1 + Vfct−1 + bf) it = σ(Wixt + Uiht−1 + Vict−1 + bi) ct = ft ⊙ct−1 + it ⊙tanh(Wcxt + Ucht−1 + bc) ot = σ(Woxt + Uoht−1 + Voct + bo) ht = ot ⊙tanh(ct), 1483 Whereas, the updation rules for GRU are as follows zt = σ(Wzxt + Uzht−1 + bz) rt = σ(Wrxt + Urht−1 + br) ht = (1 −zt) ⊙ht−1 + zt ⊙tanh(Whxt + Uh(rt ⊙ht−1) + bh), σ denotes the sigmoid function and ⊙stands for the element-wise (Hadamard) product. Unlike the simple recurrent unit, LSTM uses an extra memory cell ct that is controlled by three gates - input (it), forget (ft) and output (ot). it controls the amount of new memory content added to the memory cell, ft regulates the degree to which the existing memory is forgotten and ot finally adjusts the memory content exposure. W, U, V (weight matrices), b (bias) are the parameters. Without having a memory cell like LSTM, a GRU uses two gates namely update (zt) and reset (rt). The gate, zt decides the amount of update needed for activation and rt is used to ignore the previous hidden states (when close to 0, it forgets the earlier computation). So, for a sequence of projected characters ec1, . . . , ecm, the forward and the backward networks produce the state sequences hf 1, . . . , hf m and hb m, . . . , hb 1 respectively. Finally, we obtain the syntactic embedding of w, denoted as esyn w , by concatenating the final states of these two sequences. esyn w = [hb 1, hf m] 2.2 Model We present the sketch of the final integrated model in Figure 3. For a word w, let esem w denotes its semantic embedding obtained using word2vec. Both the vectors, esyn w and esem w are concatenated together to shape the composite representation ecom w which carries the morphological and distributional information within it. Firstly, for all the words present in the training set, their composite vectors are generated. 
Next, they are fed sentencewise into the next level of BGRNN to train the model for the edit tree classification task. This second level bidirectional network accounts the local context in both forward and backward directions, which is essential for lemmatization in context sensitive languages. Let, ecom w1 , . . . , ecom wn be the input sequence of composite vectors to the BGRNN model, representing a sentence having n words w1, . . . , wn. For the ith vector ecom wi , hf i and Figure 3: Second level BGRNN model for edit tree classification. hb i denote the forward and backward states respectively carrying the informations of w1, . . . , wi and wi, . . . , wn. 2.2.1 Incorporating Applicable Edit Trees Information One aspect that we did not look into so far, is that for a word all unique edit trees extracted from the training set are not applicable as this would lead to incompatible substitutions. For example, the edit tree for the word-lemma pair ‘sang-sing’ depicted in Figure 1, cannot be applied on the word ‘achieving’. This information is prior before training the model i.e. for any arbitrary word, we can sort out the subset of unique edit trees from the training samples in advance, which are applicable on it. In general, if all the unique edit trees in the training data are set as the class labels, the model will learn to distribute the probability mass over all the classes which is a clear-cut bottleneck. In order to alleviate this problem, we take a novel strategy so that for individual words in the input sequence, the model will learn, to which classes, the output probability should be apportioned. Let T = {t1, . . . , tk} be the set of distinct edit trees found in the training set. For the word wi in the input sequence w1, . . . , wn, we define its applicable edit trees vector as Ai = (a1 i , . . . , ak i ) where ∀j ∈{1, . . . , k}, aj i = 1 if tj is applicable for wi, otherwise 0. Hence, Ai holds the information regarding the set of edit trees to concentrate 1484 upon, while processing the word wi. We combine Ai together with hf i and hb i for the final classification task as following, li = softplus(Lfhf i + Lbhb i + LaAi + bl), where ‘softplus’ denotes the activation function f(x) = ln(1 + ex) and Lf, Lb, La and bl are the parameters trained by the network. At the end, li is passed through the softmax layer to get the output labels for wi. To pick the correct edit tree from the output of the softmax layer, we exploit the prior information Ai. Instead of choosing the class that gets the maximum probability, we select the maximum over the classes corresponding to the applicable edit trees. The idea is expressed as follows. Let Oi = (o1 i , . . . , ok i ) be the output of the softmax layer. Instead of opting for the maximum over o1 i , . . . , ok i as the class label, the highest probable class out of those corresponding to the applicable edit trees, is picked up. That is, the particular edit tree tj ∈T is considered as the right candidate for wi, where j = argmaxj′∈{1,...,k} ∧aj′ i =1 oj′ i In this way, we cancel out the non-applicable classes and focus only on the plausible candidates. 3 Experimentation Out of the nine reference languages, initially we choose four of them (Bengali, Hindi, Latin and Spanish) for in-depth analysis. We conduct an exhaustive set of experiments - such as determining the direct lemmatization accuracy, accuracy obtained without using applicable edit trees in training, measuring the model’s performance on the unseen words etc. on these four languages. 
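Concretely, the output restriction of Section 2.2.1 amounts to masking the softmax output O_i with the applicability vector A_i and taking the argmax over the surviving classes only. A minimal NumPy sketch of this step, with illustrative names rather than the authors' code, is given below.

```python
import numpy as np

def restricted_prediction(softmax_out, applicable_mask):
    """Pick the most probable edit-tree class among the applicable ones only.

    softmax_out:     (k,) probabilities o_i^1 ... o_i^k from the softmax layer
    applicable_mask: (k,) binary vector A_i, 1 where edit tree t_j applies to w_i
    """
    masked = np.where(applicable_mask == 1, softmax_out, -np.inf)
    return int(np.argmax(masked))

probs = np.array([0.50, 0.30, 0.15, 0.05])   # class 0 is most probable overall ...
mask = np.array([0, 1, 0, 1])                # ... but only classes 1 and 3 are applicable
print(restricted_prediction(probs, mask))    # 1
```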
Later we consider five more languages (Catalan, Dutch, Hungarian, Italian and Romanian) mostly for testing the generalization ability of the proposed method. For these additional languages, we present only the lemmatization accuracy in section 3.2. Datasets: As Bengali is a low-resourced language, a relatively large lemma annotated dataset is prepared for the present work using Tagore’s short stories collection2 and randomly selected news articles from miscellaneous domains. One 2www.rabindra-rachanabali.nltr.org # Sentences # Word Tokens Bengali 1,702 20,257 Hindi 36,143 819,264 Latin 15,002 165,634 Spanish 15,984 477,810 Table 1: Dataset statistics of the 4 languages. linguist took around 2 months to complete the annotation which was checked by another person and differences were sorted out. Out of the 91 short stories of Tagore, we calculate the value of (# tokens / # distinct tokens) for each story. Based on this value (lower is better), top 11 stories are selected. The news articles3 are crafted from the following domains: animal, archaeology, business, country, education, food, health, politics, psychology, science and travelogue. In Hindi, we combine the COLING’12 shared task data for dependency parsing and Hindi WSD health and tourism corpora4 (Khapra et al., 2010) together5. For Latin, the data is taken from the PROIEL treebank (Haug and Jøhndal, 2008) and for Spanish, we merge the training and development datasets of CoNLL’09 (Hajiˇc et al., 2009) shared task on syntactic and semantic dependencies. The dataset statistics are given in Table 1. We assess the lemmatization performance by measuring the direct accuracy which is the ratio of the number of correctly lemmatized words to the total number of input words. The experiments are performed using 4 fold cross validation technique i.e. the datasets are equi-partitioned into 4 parts at sentence level and then each part is tested exactly once using the model trained on the remaining 3 parts. Finally, we report the average accuracy over 4 fold. Induction of Edit Tree Set: Initially, distinct edit trees are induced from the word-lemma pairs present in the training set. Next, the words in the training data are annotated with their corresponding edit trees. Training is accomplished on this edit tree tagged text. Figure 4 plots the growth of the edit tree set against the number of word-lemma samples in the four languages. With the increase of samples, the size of edit tree set gradually converges revealing the fact that most of the frequent transformation patterns (both regular and irregular) are covered by the induction process. From 3http://www.anandabazar.com/ 4http://www.cfilt.iitb.ac.in/wsd/ annotated_corpus/ 5We also release the Hindi dataset with this paper as it is a combination of two different datasets. 1485 500 1000 1500 2000 2500 3000 3500 10000 20000 30000 40000 50000 60000 70000 80000 90000 100000 Number of distinct edit trees Number of word-lemma samples Bengali Hindi Latin Spanish Figure 4: Increase of the edit tree set size with the number of word-lemma samples. Figure 4, morphological richness can be compared across the languages. When convergence happens quickly i.e. at relatively less number of samples, it evidences that the language is less complex. Among the four reference languages, Latin stands out as the most intricate, followed by Bengali, Spanish and Hindi. Semantic Embeddings: We obtain the distributional word vectors for Bengali and Hindi by training the word2vec model on FIRE Bengali and Hindi news corpora6. 
Following the work by Mikolov et al. (2013a), continuous-bag-ofwords architecture with negative sampling is used to get 200 dimensional word vectors. For Latin and Spanish, we use the embeddings released by Bamman and Smith (2012)7 and Cardellino (2016)8 respectively. Syntactic Representation: We acquire the statistics of word length versus frequency from the datasets and find out that irrespective of the languages, longer words (have more than 20-25 characters) are few in numbers. Based on this finding, each word is limited to a sequence of 25 characters. Smaller words are padded null characters at the end and for the longer words, excess characters are truncated out. So, each word is represented as a 25 length array of one hot encoded vectors which is given input to the embedding layer that works as a look up table producing an equal length array of embedded vectors. Initialization of the embedding layer is done randomly and the embedded vector dimension is set to 10. Eventually, the output of the embedding layer is passed to the first 6http://fire.irsi.res.in/fire 7http://www.cs.cmu.edu/˜dbamman/latin. html 8http://crscardellino.me/SBWCE/ level BGRNN for learning the syntactic representation. Hyper Parameters: There are several hyper parameters in our model such as the number of neurons in the hidden layer (ht) of both first and second level BGRNN, learning mode, number of epochs to train the models, optimization algorithm, dropout rate etc. We experiment with different settings of these parameters and report where optimum results are achieved. For both the bidirectional networks, number of hidden layer neurons is set to 64. Online learning is applied for updation of the weights. Number of epochs varies across languages to converge the training. It is maximum for Bengali (around 80 epochs), followed by Latin, Spanish and Hindi taking around 50, 35 and 15 respectively. Throughout the experiments, we set the dropout rate as 0.2 to prevent over-fitting. Different optimization algorithms like AdaDelta (Zeiler, 2012), Adam (Kingma and Ba, 2014), RMSProp (Dauphin et al., 2015) are explored. Out of them, Adam yields the best result. We use the categorical cross-entropy as the loss function in our model. Baselines: We compare our method with Lemming9 and Morfette10. Both the model jointly learns lemma and other morphological tags in context. Lemming uses a 2nd-order linear-chain CRF to predict the lemmas whereas, the current version of Morfette is based on structured perceptron learning. As POS information is a compulsory requirement of these two models, the Bengali data is manually POS annotated. For the other languages, the tags were already available. Although this comparison is partially biased as the proposed method does not need POS information, but the experimental results show the effectiveness of our model. There is an option in Lemming and Morfette to provide an exhaustive set of root words which is used to exploit the dictionary features i.e. to verify if a candidate lemma is a valid form or not. To make the comparisons consistent, we do not exploit any external dictionary in our experiments. 3.1 Results The lemmatization results are presented in Table 2. We explore our proposed model with two types of gated recurrent cells - LSTM and GRU. 
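A minimal PyTorch sketch of the character-to-vector pipeline just described follows (the original implementation used Keras with a Theano backend; PyTorch is used here only for brevity). Words are padded or truncated to 25 characters, projected through a small embedding layer, and run through a bidirectional GRU whose final forward and backward states are concatenated into the syntactic embedding. The dimensions follow the stated settings (character embedding 10, 64 hidden units); the padding symbol and index mapping are assumptions, and padding is not masked here for simplicity.

```python
import torch
import torch.nn as nn

MAX_LEN, PAD = 25, 0   # fixed word length and an illustrative padding index

def encode_word(word, char_to_id, max_len=MAX_LEN):
    """Map a word to a fixed-length sequence of character indices (pad short, cut long)."""
    ids = [char_to_id.get(c, PAD) for c in word[:max_len]]
    return ids + [PAD] * (max_len - len(ids))

class CharSyntacticEncoder(nn.Module):
    """Bidirectional character GRU; e_syn = [h^b_1, h^f_m], the concatenated final states."""
    def __init__(self, n_chars, char_dim=10, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.bgru = nn.GRU(char_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, char_ids):                    # char_ids: (batch, MAX_LEN)
        _, h_n = self.bgru(self.embed(char_ids))    # h_n: (2, batch, hidden_dim)
        # h_n[0] is the final forward state, h_n[1] the final backward state
        return torch.cat([h_n[1], h_n[0]], dim=-1)  # (batch, 2 * hidden_dim)

char_to_id = {c: i + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
batch = torch.tensor([encode_word("sang", char_to_id)])
print(CharSyntacticEncoder(n_chars=len(char_to_id) + 1)(batch).shape)  # torch.Size([1, 128])
```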
As there 9http://cistern.cis.lmu.de/lemming/ 10https://github.com/gchrupala/morfette 1486 Bengali Hindi Latin Spanish BLSTM-BLSTM 90.84/91.14 94.89/94.90 89.35/89.52 97.85/97.91 BGRU-BGRU 90.63/90.84 94.44/94.50 89.40/89.59 98.07/98.11 Lemming 91.69 91.64 88.50 93.12 Morfette 90.69 90.57 87.10 92.90 Table 2: Lemmatization accuracy (in %) without/with restricting output classes. Bengali Hindi Latin Spanish BLSTM-BLSTM 86.46/89.52 94.34/94.52 85.70/87.35 97.39/97.62 BGRU-BGRU 86.39/88.90 93.84/94.04 85.49/86.87 97.51/97.73 Table 3: Lemmatization accuracy (in %) without using applicable edit trees in training. are two successive bidirectional networks - the first one for building the syntactic embedding and the next one for the edit tree classification, so basically we deal with two different models BLSTMBLSTM and BGRU-BGRU. Table 2 shows the comparison results of these models with Lemming and Morfette. In all cases, the average accuracy over 4 fold cross validation on the datasets is reported. For an entry ‘x/y’ in Table 2, x denotes the accuracy without output classes restriction, i.e. taking the maximum over all edit tree classes present in the training set, whereas y refers to the accuracy when output is restricted in only the applicable edit tree classes of the input word. Except for Bengali, the proposed models outperform the baselines for the other three languages. In Hindi, BLSTM-BLSTM gives the best result (94.90%). For Latin and Spanish, the highest accuracy is achieved by BGRU-BGRU (89.59% and 98.11% respectively). In the Bengali dataset, Lemming produces the optimum result (91.69%) beating its closest performer BLSTM-BLSTM by 0.55%. It is to note that the training set size in Bengali is smallest compared to the other languages (on average, 16, 712 tokens in each of the 4 folds). Overall, BLSTM-BLSTM and BGRU-BGRU perform equally good. For Bengali and Hindi, the former model is better and for Latin and Spanish, the later yields more accuracy. Throughout the experiments, restricting the output over applicable classes improves the performance significantly. The maximum improvements we get are: 0.30% in Bengali using BLSTM-BLSTM (from 90.84% to 91.14%), 0.06% in Hindi using BGRU-BGRU (from 94.44% to 94.50%), 0.19% in Latin using BGRU-BGRU (from 89.40% to 89.59%) and 0.06% in Spanish using BLSTM-BLSTM (from 97.85% to 97.91%). To compare between the two baselines, Lemming consistently performs better Bengali Hindi Latin Spanish 27.17 5.25 15.74 7.54 Table 4: Proportion of unknown word forms (in %) present in the test sets. than Morfette (the maximum difference between their accuracies is 1.40% in Latin). Effect of Training without Applicable Edit Trees: We also explore the impact of applicable edit trees in training. To see the effect, we train our model without giving the applicable edit trees information as input. In the model design, the equation for the final classification task is changed as follows, li = softplus(Lfhf i + Lbhb i + bl), The results are presented in Table 3. Except for Spanish, BLSTM-BLSTM outperforms BGRUBGRU in all the other languages. As compared with the results in Table 2, for every model, training without applicable edit trees degrades the lemmatization performance. In all cases, BGRUBGRU model gets more affected than BLSTMBLSTM. Language-wise, the drops in its accuracy are: 1.94% in Bengali (from 90.84% to 88.90%), 0.46% in Hindi (from 94.50% to 94.04%), 2.72% in Latin (from 89.59% to 86.87%) and 0.38% in Spanish (from 98.11% to 97.73%). 
One important finding to note in Table 3 is that irrespective of any particular language and model used, the amount of increase in accuracy due to the output restriction on the applicable classes is much more than that observed in Table 2. For instance, in Table 2 the accuracy improvement for Bengali using BLSTM-BLSTM is 0.30% (from 90.84% to 91.14%), whereas in Table 3 the corresponding value is 3.06% (from 86.46% to 89.52%). These outcomes signify the fact that training with the ap1487 Bengali Hindi Latin Spanish BLSTM-BLSTM 71.06/72.10 87.80/88.18 60.85/61.63 88.06/88.79 BGRU-BGRU 70.44/71.22 88.34/88.40 60.65/61.52 91.48/92.25 Lemming 74.10 90.35 57.19 58.89 Morfette 70.27 88.59 47.41 57.61 Table 5: Lemmatization accuracy (in %) on unseen words. Bengali Hindi Latin Spanish BLSTM-BLSTM 56.16/66.26 87.42/88.41 49.80/56.05 86.22/87.97 BGRU-BGRU 59.45/66.84 87.19/88.26 50.24/55.35 86.74/88.49 Table 6: Lemmatization accuracy (in %) on unseen words without using applicable edit trees in training. plicable edit trees already learns to dispense the output probability to the legitimate classes over which, output restriction cannot yield much enhancement. Results for Unseen Word Forms: Next, we discuss about the lemmatization performance on those words which were absent in the training set. Table 4 shows the proportion of unseen forms averaged over 4 folds on the datasets. In Table 5, we present the accuracy obtained by our models and the baselines. For Bengali and Hindi, Lemming produces the best results (74.10% and 90.35%). For Latin and Spanish, BLSTM-BLSTM and BGRU-BGRU obtain the highest accuracy (61.63% and 92.25%) respectively. In Spanish, our model gets the maximum improvement over the baselines. BGRU-BGRU beats Lemming with 33.36% margin (on average, out of 9, 011 unseen forms, 3, 005 more tokens are correctly lemmatized). Similar to the results in Table 2, the results in Table 5 evidences that restricting the output in applicable classes enhances the lemmatization performance. The maximum accuracy improvements due to the output restriction are: 1.04% in Bengali (from 71.06% to 72.10%), 0.38% in Hindi (from 87.80% to 88.18%) using BLSTM-BLSTM and 0.87% in Latin (from 60.65% to 61.52%), 0.77% in Spanish (from 91.48% to 92.25%) using BGRU-BGRU. Further, we investigate the performance of our models trained without the applicable edit trees information, on the unseen word forms. The results are given in Table 6. As expected, for every model, the accuracy drops compared to the results shown in Table 5. The only exception that we find out is in the entry for Hindi with BLSTM-BLSTM. Though without restricting the output, the accuracy in Table 5 (87.80%) is higher than the corresponding value in Table 6 (87.42%), but after outSem. Embedding Syn. Embedding Bengali 90.76/91.02 86.61/86.82 Hindi 94.86/94.86 91.24/91.25 Latin 88.90/89.09 85.31/85.49 Spanish 97.95/98 96.07/96.10 Table 7: Results (in %) obtained using semantic and syntactic embeddings separately. # Sentences # Word Tokens Catalan 14,832 474,069 Dutch 13,050 197,925 Hungarian 1,351 31,584 Italian 13,402 282,611 Romanian 8,795 202,187 Table 8: Dataset statistics of the 5 additional languages. put restriction, the performance changes (88.18% in Table 5, 88.41% in Table 6) which reveals that only selecting the maximum probable class over the applicable ones would be a better option for the unseen word forms in Hindi. 
Effects of Semantic and Syntactic Embeddings in Isolation: To understand the impact of the combined word vectors on the model’s performance, we measure the accuracy experimenting with each one of them separately. While using the semantic embedding, only distributional word vectors are used for edit tree classification. On the other hand, to test the effect of the syntactic embedding exclusively, output from the character level recurrent network is fed to the second level BGRNN. We present the results in Table 7. For Bengali and Hindi, experiments are carried out with the BLSTM-BLSTM model as it gives better results for these languages compared to BGRU-BGRU (given in Table 2). Similarly for Latin and Spanish, the results obtained from BGRU-BGRU are reported. From the outcome of these experiments, use of semantic vec1488 Catalan Dutch Hungarian Italian Romanian BLSTM-BLSTM 97.93/97.95 93.20/93.44 91.03/91.46 96.06/96.09 94.25/94.32 Lemming 89.80 86.95 87.95 92.51 93.34 Morfette 89.46 86.62 86.52 92.02 94.13 Table 9: Lemmatization accuracy (in %) for the 5 languages. tor proves to be more effective than the character level embedding. However, to capture the distributional properties of words efficiently, a huge corpus is needed which may not be available for low resourced languages. In that case, making use of syntactic embedding is a good alternative. Nonetheless, use of both types of embedding together improves the result. 3.2 Experimental Results for Another Five Languages As mentioned earlier, five additional languages (Catalan, Dutch, Hungarian, Italian and Romanian) are considered to test the generalization ability of the method. The datasets are taken from the UD Treebanks11 (Nivre et al., 2017). For each language, we merge the training and development data together and perform 4 fold cross validation on it to measure the average accuracy. The dataset statistics are shown in Table 8. For experimentation, we use the pre-trained semantic embeddings released by (Bojanowski et al., 2016). Only BLSTM-BLSTM model is explored and it is compared with Lemming and Morfette. The hyper parameters are kept same as described previously except for the number of epochs needed for training across the languages. We present the results in Table 9. For all the languages, BLSTM-BLSTM outperforms Lemming and Morfette. The maximum improvement over the baselines we get is for Catalan (beats Lemming and Morfette by 8.15% and 8.49% respectively). Similar to the results in Table 2, restricting the output over applicable classes yields consistent performance improvement. 4 Conclusion This article presents a neural network based context sensitive lemmatization method which is language independent and supervised in nature. The proposed model learns the transformation patterns between word-lemma pairs and hence, can handle the unknown word forms too. Additionally, it does not rely on human defined features and various 11http://universaldependencies.org/ morphological tags except the gold lemma annotated continuous text. We explore different variations of the model architecture by changing the type of recurrent units. For evaluation, nine languages are taken as the references. Except Bengali, the proposed method outperforms the stateof-the-art models (Lemming and Morfette) on all the other languages. For Bengali, it produces the second best performance (91.14% using BLSTMBLSTM). 
We measure the accuracy on the partial data (keeping the data size comparable to the Bengali dataset) for Hindi, Latin and Spanish to check the effect of the data amount on the performance. For Hindi, the change in accuracy is insignificant but for Latin and Spanish, accuracy drops by 3.50% and 6% respectively. The time requirement of the proposed method is also analyzed. Training time depends on several parameters such as size of the data, number of epochs required for convergence, configuration of the system used etc. In our work, we use the ‘keras’ software keeping ‘theano’ as backend. The codes were run on a single GPU (Nvidia GeForce GTX 960, 2GB memory). Once trained, the model takes negligible time to predict the appropriate edit trees for test words (e.g. 844 and 930 words/second for Bengali and Hindi respectively). We develop a Bengali lemmatization dataset which is definitely a notable contribution to the language resources. From the present study, one important finding comes out that for the unseen words, the lemmatization accuracy drops by a large margin in Bengali and Spanish, which may be the area of further research work. Apart from it, we intend to propose a neural architecture that accomplishes the joint learning of lemmas with other morphological attributes. References David Bamman and David Smith. 2012. Extracting two thousand years of latin from a million book library. J. Comput. Cult. Herit. 5(1):2:1–2:13. https://doi.org/10.1145/2160165.2160167. Pushpak Bhattacharyya, Ankit Bahuguna, Lavita Talukdar, and Bornali Phukan. 2014. Facilitating multi-lingual sense annotation: Human mediated 1489 lemmatizer. In Heili Orav, Christiane Fellbaum, and Piek Vossen, editors, Proceedings of the Seventh Global Wordnet Conference. Tartu, Estonia, pages 224–231. http://www.aclweb.org/anthology/W140130. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606 https://arxiv.org/abs/1607.04606. Cristian Cardellino. 2016. Spanish Billion Words Corpus and Embeddings. http://crscardellino.me/SBWCE/. Abhisek Chakrabarty, Akshay Chaturvedi, and Utpal Garain. 2016. A neural lemmatizer for bengali. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA), Paris, France, pages 2558–2561. http://www.lrecconf.org/proceedings/lrec2016/pdf/955P aper.pdf. Abhisek Chakrabarty and Utpal Garain. 2016. Benlem (a bengali lemmatizer) and its role in wsd. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 15(3):12:1–12:18. https://doi.org/10.1145/2835494. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 https://arxiv.org/abs/1409.1259. Grzegorz Chrupała. 2008. Towards a machinelearning architecture for lexical functional grammar parsing. Ph.D. thesis, Dublin City University. http://doras.dcu.ie/550/. Grzegorz Chrupala, Georgiana Dinu, and Josef van Genabith. 2008. Learning morphology with morfette. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08). European Language Resources Association (ELRA), Marrakech, Morocco. http://www.lrecconf.org/proceedings/lrec2008/pdf/594paper.pdf. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. 
The sigmorphon 2016 shared task— morphological reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON. Association for Computational Linguistics, Berlin, Germany. http://aclweb.org/anthology/sigmorphon.html. Yann N. Dauphin, Harm de Vries, and Yoshua Bengio. 2015. Equilibrated adaptive learning rates for non-convex optimization. In Proceedings of the 28th International Conference on Neural Information Processing Systems. MIT Press, Cambridge, MA, USA, NIPS’15, pages 1504–1512. http://dl.acm.org/citation.cfm?id=2969239.2969407. Abu Zaher Md Faridee, Francis M Tyers, et al. 2009. Development of a morphological analyser for bengali. In Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation. Universidad de Alicante. Departamento de Lenguajes y Sistemas Inform´aticos, pages 43– 50. http://www.mt-archive.info/FreeRBMT-2009Faridee.pdf. Andrea Gesmundo and Tanja Samardzic. 2012. Lemmatisation as a tagging task. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Jeju Island, Korea, pages 368–372. http://www.aclweb.org/anthology/P12-2072. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 https://arxiv.org/abs/1308.0850. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task. Association for Computational Linguistics, Boulder, Colorado, pages 1–18. http://www.aclweb.org/anthology/W09-1201. Dag TT Haug and Marius Jøhndal. 2008. Creating a parallel treebank of the old indo-european bible translations. In Proceedings of the Second Workshop on Language Technology for Cultural Heritage Data (LaTeCH 2008). pages 27–34. Mitesh Khapra, Anup Kulkarni, Saurabh Sohoney, and Pushpak Bhattacharyya. 2010. All words domain adapted wsd: Finding a middle ground between supervision and unsupervision. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 1532–1541. http://www.aclweb.org/anthology/P10-1155. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 https://arxiv.org/abs/1412.6980. Kimmo Koskenniemi. 1984. A general computational model for word-form recognition and production. In Proceedings of the 10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stanford, California, USA, pages 178–181. https://doi.org/10.3115/980491.980529. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: 1490 Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1520– 1530. http://aclweb.org/anthology/D15-1176. Aki Loponen and Kalervo J¨arvelin. 2010. 
A dictionary and corpus independent statistical lemmatizer for information retrieval in low resource languages. In Multilingual and Multimodal Information Access Evaluation, Springer, pages 3–14. https://doi.org/10.1007/978-3-642-15998-53. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems. Curran Associates Inc., USA, NIPS’13, pages 3111–3119. http://dl.acm.org/citation.cfm?id=2999792.2999959. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 746–751. http://www.aclweb.org/anthology/N13-1090. Thomas M¨uller, Ryan Cotterell, Alexander Fraser, and Hinrich Sch¨utze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2268–2274. http://aclweb.org/anthology/D15-1272. Garrett Nicolai and Grzegorz Kondrak. 2016. Leveraging inflection tables for stemming and lemmatization. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1138– 1147. http://www.aclweb.org/anthology/P16-1108. Joakim Nivre, ˇZeljko Agi´c, Lars Ahrenberg, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Cristina Bosco, Gosse Bouma, Sam Bowman, Marie Candito, G¨uls¸en Cebirolu Eryiit, Giuseppe G. A. 
Celano, Fabricio Chalub, Jinho Choi, C¸ ar C¸ ¨oltekin, Miriam Connor, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Tomaˇz Erjavec, Rich´ard Farkas, Jennifer Foster, Cl´audia Freitas, Katar´ına Gajdoˇsov´a, Daniel Galbraith, Marcos Garcia, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh G¨okrmak, Yoav Goldberg, Xavier G´omez Guinovart, Berta Gonz´ales Saavedra, Matias Grioni, Normunds Gr¯uz¯itis, Bruno Guillaume, Nizar Habash, Jan Hajiˇc, Linh H`a M, Dag Haug, Barbora Hladk´a, Petter Hohle, Radu Ion, Elena Irimia, Anders Johannsen, Fredrik Jørgensen, H¨uner Kas¸kara, Hiroshi Kanayama, Jenna Kanerva, Natalia Kotsyba, Simon Krek, Veronika Laippala, Phng Lˆe Hng, Alessandro Lenci, Nikola Ljubeˇsi´c, Olga Lyashevskaya, Teresa Lynn, Aibek Makazhanov, Christopher Manning, C˘at˘alina M˘ar˘anduc, David Mareˇcek, H´ector Mart´ınez Alonso, Andr´e Martins, Jan Maˇsek, Yuji Matsumoto, Ryan McDonald, Anna Missil¨a, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Shunsuke Mori, Bohdan Moskalevskyi, Kadri Muischnek, Nina Mustafina, Kaili M¨u¨urisep, Lng Nguyn Th, Huyn Nguyn Th Minh, Vitaly Nikolaev, Hanna Nurmi, Stina Ojala, Petya Osenova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Barbara Plank, Martin Popel, Lauma Pretkalnia, Prokopis Prokopidis, Tiina Puolakainen, Sampo Pyysalo, Alexandre Rademaker, Loganathan Ramasamy, Livy Real, Laura Rituma, Rudolf Rosa, Shadi Saleh, Manuela Sanguinetti, Baiba Saul¯ite, Sebastian Schuster, Djam´e Seddah, Wolfgang Seeker, Mojgan Seraji, Lena Shakurova, Mo Shen, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk´o, M´aria ˇSimkov´a, Kiril Simov, Aaron Smith, Alane Suhr, Umut Sulubacak, Zsolt Sz´ant´o, Dima Taji, Takaaki Tanaka, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Larraitz Uria, Gertjan van Noord, Viktor Varga, Veronika Vincze, Jonathan North Washington, Zdenˇek ˇZabokrtsk´y, Amir Zeldes, Daniel Zeman, and Hanzhi Zhu. 2017. Universal dependencies 2.0. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University. http://hdl.handle.net/11234/11983. Snigdha Paul, Nisheeth Joshi, and Iti Mathur. 2013. Development of a hindi lemmatizer. International Journal of Computational Linguistics and Natural Language Processing 2(5):380–384. https://arxiv.org/abs/1305.6211. Jo¨el Plisson, Nada Lavrac, Dunja Mladenic, et al. 2004. A rule based approach to word lemmatization. Proceedings of IS-2004 pages 83–86. Kristina Toutanova and Colin Cherry. 2009. A global model for joint lemmatization and partof-speech prediction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, Suntec, Singapore, pages 486–494. http://aclweb.org/anthology/P/P09/P09-1055.pdf. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 https://arxiv.org/abs/1212.5701. 1491
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1492–1502 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1137 Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling Kazuya Kawakami♠ Chris Dyer♣ Phil Blunsom♠♣ ♠Department of Computer Science, University of Oxford, Oxford, UK ♣DeepMind, London, UK {kazuya.kawakami,phil.blunsom}@cs.ox.ac.uk,[email protected] Abstract Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the “bursty” distribution of such words. In this paper, we augment a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus; MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages. 1 Introduction Language modeling is an important problem in natural language processing with many practical applications (translation, speech recognition, spelling autocorrection, etc.). Recent advances in neural networks provide strong representational power to language models with distributed representations and unbounded dependencies based on recurrent networks (RNNs). However, most language models operate by generating words by sampling from a closed vocabulary which is composed of the most frequent words in a corpus. Rare tokens are typically replaced by a special token, called the unknown word token, ⟨UNK⟩. Although fixed-vocabulary language models have some important practical applications and are appealing models for study, they fail to capture two empirical facts about the distribution of words in natural languages. First, vocabularies keep growing as the number of documents in a corpus grows: new words are constantly being created (Heaps, 1978). Second, rare and newly created words often occur in “bursts”, i.e., once a new or rare word has been used once in a document, it is often repeated (Church and Gale, 1995; Church, 2000). The open-vocabulary problem can be solved by dispensing with word-level models in favor of models that predict sentences as sequences of characters (Sutskever et al., 2011; Chung et al., 2017). Character-based models are quite successful at learning what (new) word forms look like (e.g., they learn a language’s orthographic conventions that tell us that sustinated is a plausible English word and bzoxqir is not) and, when based on models that learn long-range dependencies such as RNNs, they can also be good models of how words fit together to form sentences. However, existing character-sequence models have no explicit mechanism for modeling the fact that once a rare word is used, it is likely to be used again.
In this paper, we propose an extension to character-level language models that enables them to reuse previously generated tokens (§2). Our starting point is a hierarchical LSTM that has been previously used for modeling sentences (word by word) in a conversation (Sordoni et al., 2015), except here we model words (character by character) in a sentence. To this model, we add a caching mechanism similar to recent proposals for caching that have been advocated for closed-vocabulary models (Merity et al., 2017; Grave et al., 2017). As word tokens are generated, they are placed in an LRU cache, and, at each time step the model decides whether to copy a previously generated word from the cache or to generate it from scratch, character by character. The decision of whether 1492 to use the cache or not is a latent variable that is marginalised during learning and inference. In summary, our model has three properties: it creates new words, it accounts for their burstiness using a cache, and, being based on LSTM s over word representations, it can model long range dependencies. To evaluate our model, we perform ablation experiments with variants of our model without the cache or hierarchical structure. In addition to standard English data sets (PTB and WikiText-2), we introduce a new multilingual data set: the Multilingual Wikipedia Corpus (MWC), which is constructed from comparable articles from Wikipedia in 7 typologically diverse languages (§3) and show the effectiveness of our model in all languages (§4). By looking at the posterior probabilities of the generation mechanism (language model vs. cache) on held-out data, we find that the cache is used to generate “bursty” word types such as proper names, while numbers and generic content words are generated preferentially from the language model (§5). 2 Model In this section, we describe our hierarchical character language model with a word cache. As is typical for RNN language models, our model uses the chain rule to decompose the problem into incremental predictions of the next word conditioned on the history: p(w) = |w| Y t=1 p(wt | w<t). We make two modifications to the traditional RNN language model, which we describe in turn. First, we begin with a cache-less model we call the hierarchical character language model (HCLM; §2.1) which generates words as a sequence of characters and constructs a “word embedding” by encoding a character sequence with an LSTM (Ling et al., 2015). However, like conventional closedvocabulary, word-based models, it is based on an LSTM that conditions on words represented by fixed-length vectors.1 The HCLM has no mechanism to reuse words that it has previously generated, so new forms will 1The HCLM is an adaptation of the hierarchical recurrent encoder-decoder of Sordoni et al. (2015) which was used to model dialog as a sequence of actions sentences which are themselves sequences of words. The original model was proposed to compose words into query sequences but we use it to compose characters into word sequences. only be repeated with very low probability. However, since the HCLM is not merely generating sentences as a sequence of characters, but also segmenting them into words, we may add a wordbased cache to which we add words keyed by the hidden state being used to generate them (§2.2). This cache mechanism is similar to the model proposed by Merity et al. (2017). Notation. Our model assigns probabilities to sequences of words w = w1, . . . 
, w|w|, where |w| is the length, and where each word wi is represented by a sequence of characters ci = ci,1, . . . , ci,|ci| of length |ci|. 2.1 Hierarchical Character-level Language Model (HCLM) This hierarchical model satisfies our linguistic intuition that written language has (at least) two different units, characters and words. The HCLM consists of four components, three LSTMs (Hochreiter and Schmidhuber, 1997): a character encoder, a word-level context encoder, and a character decoder (denoted LSTMenc, LSTMctx, and LSTMdec, respectively), and a softmax output layer over the character vocabulary. Fig. 1 illustrates an unrolled HCLM. Suppose the model reads word wt−1 and predicts the next word wt. First, the model reads the character sequence representing the word wt−1 = ct−1,1, . . . , ct−1,|ct−1| where |ct−1| is the length of the word generated at time t −1 in characters. Each character is represented as a vector vct−1,1, . . . , vct−1,|ct−1| and fed into the encoder LSTMenc . The final hidden state of the encoder LSTMenc is used as the vector representation of the previously generated word wt−1, henc t = LSTMenc(vct−1,1, . . . , vct−1,|ct|). Then all the vector representations of words (vw1, . . . , vw|w|) are processed with a context LSTMctx . Each of the hidden states of the context LSTMctx are considered representations of the history of the word sequence. hctx t = LSTMctx(henc 1 , . . . , henc t ) Finally, the initial state of the decoder LSTM is set to be hctx t and the decoder LSTM reads a vector representation of the start symbol v⟨S⟩and generates the next word wt+1 character by character. To predict the j-th character in wt, the decoder 1493 P o k é m o n </s> The Pokémon Company International (formerly Pokémon USA Inc.), a subsidiary of Japan's Pokémon Co., oversees all Pokémon licensing … C o m p a n y </s> …. ( f o r m e r l y </s> Cache rt . . . . . . . . <s> P o k é m o n P o k é m o n </s> henc t hctx t wt−1 wt p(Pok´emon) = λtplm(Pok´emon) + (1 −λt)pptr(Pok´emon) ut λt pptr(Pok´emon) plm(Pok´emon) Figure 1: Description of Hierarchical Character Language Model with Cache. LSTM reads vector representations of the previous characters in the word, conditioned on the context vector hctx t and a start symbol. hdec t,j = LSTMdec(vct,1, . . . , vct,j−1, hctx t , v⟨S⟩). The character generation probability is defined by a softmax layer for the corresponding hidden representation of the decoder LSTM . p(ct,j | w<t, ct,<j) = softmax(Wdechdec t,j + bdec) Thus, a word generation probability from HCLM is defined as follows. plm(wt | w<t) = |ct| Y j=1 p(ct,j | w<t, ct,<j) 2.2 Continuous cache component The cache component is an external memory structure which store K elements of recent history. Similarly to the memory structure used in Grave et al. (2017), a word is added to a key-value memory after each generation of wt. The key at position i ∈[1, K] is ki and its value mi. The memory slot is chosen as follows: if the wt exists already in the memory, its key is updated (discussed below). Otherwise, if the memory is not full, an empty slot is chosen or the least recently used slot is overwritten. When writing a new word to memory, the key is the RNN representation that was used to generate the word (ht) and the value is the word itself (wt). In the case when the word already exists in the cache at some position i, the ki is updated to be the arithmetic average of ht and the existing ki. 
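Before turning to the copy probability, the cache-less HCLM of Section 2.1 can be sketched compactly as follows. This is a PyTorch illustration with toy dimensions and batch size 1, assuming teacher-forced decoding of the next word's characters; it is not the authors' implementation, and the way the context vector initialises the decoder is simplified.

```python
import torch
import torch.nn as nn

class HCLM(nn.Module):
    """Hierarchical character LM: encode each word from its characters, track word-level
    context, then decode the next word character by character (cache omitted)."""
    def __init__(self, n_chars, char_dim=16, hidden=64):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim)
        self.encoder = nn.LSTM(char_dim, hidden, batch_first=True)   # LSTM_enc
        self.context = nn.LSTM(hidden, hidden, batch_first=True)     # LSTM_ctx
        self.decoder = nn.LSTM(char_dim, hidden, batch_first=True)   # LSTM_dec
        self.out = nn.Linear(hidden, n_chars)                        # softmax over characters

    def forward(self, words, next_word_prefix):
        # words: list of (1, chars) tensors for w_1..w_{t-1}; next_word_prefix: (1, chars)
        encs = []
        for w in words:
            _, (h, _) = self.encoder(self.char_embed(w))
            encs.append(h[-1])                                       # h^enc for each word
        ctx_out, _ = self.context(torch.stack(encs, dim=1))          # (1, t-1, hidden)
        h_ctx = ctx_out[:, -1].unsqueeze(0)                          # condition on the history
        dec_out, _ = self.decoder(self.char_embed(next_word_prefix),
                                  (h_ctx, torch.zeros_like(h_ctx)))
        return self.out(dec_out)                                     # logits per character step

model = HCLM(n_chars=30)
history = [torch.tensor([[3, 7, 12]]), torch.tensor([[5, 9]])]       # two previous words
logits = model(history, torch.tensor([[1, 4, 2]]))                   # prefix of the next word
print(logits.shape)                                                  # torch.Size([1, 3, 30])
```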
To define the copy probability from the cache at time t, a distribution over copy sites is defined using the attention mechanism of Bahdanau et al. (2015). To do so, we construct a query vector (rt) from the RNN’s current hidden state ht, rt = tanh(Wqht + bq), then, for each element i of the cache, a ‘copy score,’ ui,t is computed, ui,t = vT tanh(Wuki + rt). Finally, the probability of generating a word via the copying mechanism is: pmem(i | ht) = softmaxi(ut) pptr(wt | ht) = pmem(i | ht)[mi = wt], where [mi = wt] is 1 if the ith value in memory is wt and 0 otherwise. Since pmem defines a distribution of slots in the cache, pptr translates it into word space. 2.3 Character-level Neural Cache Language Model The word probability p(wt | w<t) is defined as a mixture of the following two probabilities. The first 1494 one is a language model probability, plm(wt | w<t) and the other is pointer probability , pptr(wt | w<t). The final probability p(wt | w<t) is λtplm(wt | w<t) + (1 −λt)pptr(wt | w<t), where λt is computed by a multi-layer perceptron with two non-linear transformations using ht as its input, followed by a transformation by the logistic sigmoid function: γt = MLP(ht), λt = 1 1 −e−γt . We remark that Grave et al. (2017) use a clever trick to estimate the probability, λt of drawing from the LM by augmenting their (closed) vocabulary with a special symbol indicating that a copy should be used. This enables word types that are highly predictive in context to compete with the probability of a copy event. However, since we are working with an open vocabulary, this strategy is unavailable in our model, so we use the MLP formulation. 2.4 Training objective The model parameters as well as the character projection parameters are jointly trained by maximizing the following log likelihood of the observed characters in the training corpus, L = − X log p(wt | w<t). 3 Datasets We evaluate our model on a range of datasets, employing preexisting benchmarks for comparison to previous published results, and a new multilingual corpus which specifically tests our model’s performance across a range of typological settings. 3.1 Penn Tree Bank (PTB) We evaluate our model on the Penn Tree Bank. For fair comparison with previous works, we followed the standard preprocessing method used by Mikolov et al. (2010). In the standard preprocessing, tokenization is applied, words are lowercased, and punctuation is removed. Also, less frequent words are replaced by unknown an token (UNK),2 constraining the word vocabulary size to be 10k. Because of this preprocessing, we do not expect this dataset to benefit from the modeling innovations we have introduced in the paper. Fig.1 summarizes the corpus statistics. 2When the unknown token is used in character-level model, it is treated as if it were a normal word (i.e. UNK is the Train Dev Test Character types 50 50 48 Word types 10000 6022 6049 OOV rate 0.00% 0.00% Word tokens 0.9M 0.1M 0.1M Characters 5.1M 0.4M 0.4M Table 1: PTB Corpus Statistics. 3.2 WikiText-2 Merity et al. (2017) proposed the WikiText-2 Corpus as a new benchmark dataset.3 They pointed out that the preprocessed PTB is unrealistic for real language use in terms of word distribution. Since the vocabulary size is fixed to 10k, the word frequency does not exhibit a long tail. The wikiText-2 corpus is constructed from 720 articles. They provided two versions. The version for word level language modeling was preprocessed by discarding infrequent words. 
But, for character-level models, they provided raw documents without any removal of word or character types or lowercasing, but with tokenization. We make one change to this corpus: since Wikipedia articles make extensive use of characters from other languages; we replaced character types that occur fewer than 25 times were replaced with a dummy character (this plays the role of the ⟨UNK⟩token in the character vocabulary). Tab. 2 summarizes the corpus statistics. Train Dev Test Character types 255 128 138 Word types 76137 19813 21109 OOV rate 4.79% 5.87% Word tokens 2.1M 0.2M 0.2M Characters 10.9M 1.1M 1.3M Table 2: WikiText-2 Corpus Statistics. 3.3 Multilingual Wikipedia Corpus (MWC) Languages differ in what word formation processes they have. For character-level modeling it is therefore interesting to compare a model’s performance sequence U, N, and K). This is somewhat surprising modeling choice, but it has become conventional (Chung et al., 2017). 3http://metamind.io/research/thewikitext-long-term-dependency-languagemodeling-dataset/ 1495 across languages. Since there is at present no standard multilingual language modeling dataset, we created a new dataset, the Multilingual Wikipedia Corpus (MWC), a corpus of the same Wikipedia articles in 7 languages which manifest a range of morphological typologies. The MWC contains English (EN), French (FR), Spanish (ES), German (DE), Russian (RU), Czech (CS), and Finnish (FI). To attempt to control for topic divergences across languages, every language’s data consists of the same articles. Although these are only comparable (rather than true translations), this ensures that the corpus has a stable topic profile across languages.4 Construction & Preprocessing We constructed the MWC similarly to the WikiText-2 corpus. Articles were selected from Wikipedia in the 7 target languages. To keep the topic distribution to be approximately the same across the corpora, we extracted articles about entities which explained in all the languages. We extracted articles which exist in all languages and each consist of more than 1,000 words, for a total of 797 articles. These crosslingual articles are, of course, not usually translations, but they tend to be comparable. This filtering ensures that the topic profile in each language is similar. Each language corpus is approximately the same size as the WikiText-2 corpus. Wikipedia markup was removed with WikiExtractor,5 to obtain plain text. We used the same thresholds to remove rare characters in the WikiText-2 corpus. No tokenization or other normalization (e.g., lowercasing) was done. Statistics After the preprocessing described above, we randomly sampled 360 articles. The articles are split into 300, 30, 30 sets and the first 300 articles are used for training and the rest are used for dev and test respectively. Table 3 summarizes the corpus statistics. Additionally, we show in Fig. 2 the distribution of frequencies of OOV word types (relative to the training set) in the dev+test portions of the corpus, which shows a power-law distribution, which is expected for the burstiness of rare words found in prior work. Curves look similar for all languages (see Appendix A). 4The Multilingual Wikipedia Corpus (MWC) is available for download from http://k-kawakami.com/ research/mwc 5https://github.com/attardi/ wikiextractor Figure 2: Histogram of OOV word frequencies in the dev+test part of the MWC Corpus (EN). 
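The character-vocabulary preprocessing used for WikiText-2 and the MWC (replacing character types seen fewer than 25 times with a dummy character) can be sketched as follows, assuming the counts are taken on the training split; the choice of dummy symbol is an assumption.

```python
from collections import Counter

def replace_rare_chars(train_text, other_texts, min_count=25, dummy="\uFFFD"):
    """Count character types on the training split and map rare ones to a dummy symbol."""
    counts = Counter(train_text)
    keep = {c for c, n in counts.items() if n >= min_count}
    clean = lambda text: "".join(c if c in keep else dummy for c in text)
    return clean(train_text), [clean(t) for t in other_texts]

train, (dev, test) = replace_rare_chars("a" * 30 + "§", ["a§a", "bbb"])
print(dev, test)   # the rare '§' and the unseen 'b' both map to the dummy symbol
```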
4 Experiments We now turn to a series of experiments to show the value of our hierarchical character-level cache language model. For each dataset we trained the model with LSTM units. To compare our results with a strong baseline, we also train a model without the cache. Model Configuration For HCLM and HCLM with cache models, We used 600 dimensions for the character embeddings and the LSTMs have 600 hidden units for all the experiments. This keeps the model complexity to be approximately the same as previous works which used an LSTM with 1000 dimension. Our baseline LSTM have 1000 dimensions for embeddings and reccurence weights. For the cache model, we used cache size 100 in every experiment. All the parameters including character projection parameters are randomly sampled from uniform distribution from −0.08 to 0.08. The initial hidden and memory state of LSTMenc and LSTMctx are initialized with zero. Mini-batches of size 25 are used for PTB experiments and 10 for WikiText-2, due to memory limitations. The sequences were truncated with 35 words. Then the words are decomposed to characters and fed into the model. A Dropout rate of 0.5 was used for all but the recurrent connections. Learning The models were trained with the Adam update rule (Kingma and Ba, 2015) with a learning rate of 0.002. The maximum norm of the gradients was clipped at 10. Evaluation We evaluated our models with bitsper-character (bpc) a standard evaluation metric 1496 Char. Types Word Types OOV rate Tokens Characters Train Valid Test Train Valid Test Valid Test Train Valid Test Train Valid Test EN 307 160 157 193808 38826 35093 6.60% 5.46% 2.5M 0.2M 0.2M 15.6M 1.5M 1.3M FR 272 141 155 166354 34991 38323 6.70% 6.96% 2.0M 0.2M 0.2M 12.4M 1.3M 1.6M DE 298 162 183 238703 40848 41962 7.07% 7.01% 1.9M 0.2M 0.2M 13.6M 1.2M 1.3M ES 307 164 176 160574 31358 34999 6.61% 7.35% 1.8M 0.2M 0.2M 11.0M 1.0M 1.3M CS 238 128 144 167886 23959 29638 5.06% 6.44% 0.9M 0.1M 0.1M 6.1M 0.4M 0.5M FI 246 123 135 190595 32899 31109 8.33% 7.39% 0.7M 0.1M 0.1M 6.4M 0.7M 0.6M RU 273 184 196 236834 46663 44772 7.76% 7.20% 1.3M 0.1M 0.1M 9.3M 1.0M 0.9M Table 3: Summary of MWC Corpus. for character-level language models. Following the definition in Graves (2013), bits-per-character is the average value of −log2 p(wt | w<t) over the whole test set, bpc = −1 |c| log2 p(w), where |c| is the length of the corpus in characters. 4.1 Results PTB Tab. 4 summarizes results on the PTB dataset.6 Our baseline HCLM model achieved 1.276 bpc which is better performance than the LSTM with Zoneout regularization (Krueger et al., 2017). And HCLM with cache outperformed the baseline model with 1.247 bpc and achieved competitive results with state-of-the-art models with regularization on recurrence weights, which was not used in our experiments. Expressed in terms of per-word perplexity (i.e., rather than normalizing by the length of the corpus in characters, we normalize by words and exponentiate), the test perplexity on HCLM with cache is 94.79. The performance of the unregularized 2-layer LSTM with 1000 hidden units on wordlevel PTB dataset is 114.5 and the same model with dropout achieved 87.0. Considering the fact that our character-level models are dealing with an open vocabulary without unknown tokens, the results are promising. WikiText-2 Tab. 5 summarizes results on the WikiText-2 dataset. Our baseline, LSTM achieved 1.803 bpc and HCLM model achieved 1.670 bpc. The HCLM with cache outperformed the baseline models and achieved 1.500 bpc. 
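The per-word perplexities quoted in these results come from re-normalizing the same total log probability by the number of words instead of the number of characters and then exponentiating. A small sketch of that conversion, with our own function and variable names (exact values depend on the precise character and word counts of the evaluation set):

```python
def bpc_and_word_ppl(total_log2_prob, n_chars, n_words):
    """total_log2_prob: sum of log2 p(w_t | w_<t) over the evaluation corpus (a negative number);
    n_chars / n_words: length of the evaluation corpus in characters / words."""
    bpc = -total_log2_prob / n_chars          # average bits per character, as defined in Sec. 4
    bits_per_word = -total_log2_prob / n_words
    return bpc, 2.0 ** bits_per_word          # word-level perplexity: exponentiate the per-word bits
```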
The word level perplexity is 227.30, which is quite high compared to the reported word level baseline result 100.9 6Models designated with a * have more layers and more parameters. Method Dev Test CW-RNN (Koutnik et al., 2014) 1.46 HF-MRNN (Mikolov et al., 2012) 1.41 MI-RNN (Wu et al., 2016) 1.39 ME n-gram (Mikolov et al., 2012) 1.37 RBN (Cooijmans et al., 2017) 1.281 1.32 Recurrent Dropout (Semeniuta et al., 2016) 1.338 1.301 Zoneout (Krueger et al., 2017) 1.362 1.297 HM-LSTM (Chung et al., 2017) 1.27 HyperNetwork (Ha et al., 2017) 1.296 1.265 LayerNorm HyperNetwork (Ha et al., 2017) 1.281 1.250 2-LayerNorm HyperLSTM (Ha et al., 2017)* - 1.219 2-Layer with New Cell (Zoph and Le, 2016)* - 1.214 LSTM (Our Implementation) 1.369 1.331 HCLM 1.308 1.276 HCLM with Cache 1.266 1.247 Table 4: Results on PTB Corpus (bits-percharacter). HCLM augmented with a cache obtains the best results among models which have approximately the same numbers of parameter as single layer LSTM with 1,000 hidden units. with LSTM with ZoneOut and Variational Dropout regularization (Merity et al., 2017). However, the character-level model is dealing with 76,136 types in training set and 5.87% OOV rate where the word level models only use 33,278 types without OOV in test set. The improvement rate over the HCLM baseline is 10.2% which is much higher than the improvement rate obtained in the PTB experiment. Method Dev Test LSTM 1.758 1.803 HCLM 1.625 1.670 HCLM with Cache 1.480 1.500 Table 5: Results on WikiText-2 Corpus . Multilingual Wikipedia Corpus (MWC) Tab. 6 summarizes results on the MWC dataset. Similarly to WikiText-2 experiments, LSTM 1497 is strong baseline. We observe that the cache mechanism improve performance in every languages. In English, HCLM with cache achieved 1.538 bpc where the baseline is 1.622 bpc. It is 5.2% improvement. For other languages, the improvement rates were 2.7%, 3.2%, 3.7%, 2.5%, 4.7%, 2.7% in FR, DE, ES, CS, FI, RU respectively. The best improvement rate was obtained in Finnish. 5 Analysis In this section, we analyse the behavior of proposed model qualitatively. To analyse the model, we compute the following posterior probability which tell whether the model used the cache given a word and its preceding context. Let zt be a random variable that says whether to use the cache or the LM to generate the word at time t. We would like to know, given the text w, whether the cache was used at time t. This can be computed as follows: p(zt | w) = p(zt, wt | ht, cachet) p(wt | ht, cachet) = (1 −λt)pptr(wt | ht, cachet) p(wt | ht, cachet) , where cachet is the state of the cache at time t. We report the average posterior probability of cache generation excluding the first occurrence of w, p(z | w). Tab. 7 shows the words in the WikiText-2 test set that occur more than 1 time that are most/least likely to be generated from cache and character language model (words that occur only one time cannot be cache-generated). We see that the model uses the cache for proper nouns: Lesnar, Gore, etc., as well as very frequent words which always stored somewhere in the cache such as single-token punctuation, the, and of. In contrast, the model uses the language model to generate numbers (which tend not to be repeated): 300, 770 and basic content words: sounds, however, unable, etc. 
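The posterior analysis above uses only quantities the model already produces at each step. The following is an illustrative sketch with our own naming, following the definition of p(z_t | w) and the convention of excluding each word type's first occurrence; it is not the authors' analysis code.

```python
from collections import defaultdict

def cache_posterior(lam_t, p_ptr_wt, p_wt):
    """p(z_t = cache | w) for one token, where p_wt = lam_t*p_lm + (1 - lam_t)*p_ptr is the mixture."""
    return (1.0 - lam_t) * p_ptr_wt / p_wt

def average_posterior_per_type(tokens, posteriors):
    """Average p(z | w) per word type, excluding each type's first occurrence as described above."""
    seen, total, count = set(), defaultdict(float), defaultdict(int)
    for w, post in zip(tokens, posteriors):
        if w in seen:            # a type's first occurrence cannot yet have been reused
            total[w] += post
            count[w] += 1
        seen.add(w)
    return {w: total[w] / count[w] for w in count}
```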
This pattern is similar to the pattern found in empirical distribution of frequencies of rare words observed in prior wors (Church and Gale, 1995; Church, 2000), which suggests our model is learning to use the cache to account for bursts of rare words. To look more closely at rare words, we also investigate how the model handles words that occurred between 2 and 100 times in the test set, but fewer than 5 times in the training set. Fig. 3 is a scatter plot of p(z | w) vs the empirical frequency in the test set. As expected, more frequently repeated words types are increasingly likely to be drawn from the cache, but less frequent words show a range of cache generation probabilities. Figure 3: Average p(z | w) of OOV words in test set vs. term frequency in the test set for words not obsered in the training set. The model prefers to copy frequently reused words from cache component, which tend to names (upper right) while character level generation is used for infrequent open class words (bottom left). Tab. 8 shows word types with the highest and lowest average p(z | w) that occur fewer than 5 times in the training corpus. The pattern here is similar to the unfiltered list: proper nouns are extremely likely to have been cache-generated, whereas numbers and generic (albeit infrequent) content words are less likely to have been. 6 Discussion Our results show that the HCLM outperforms a basic LSTM. With the addition of the caching mechanism, the HCLM becomes consistently more powerful than both the baseline HCLM and the LSTM. This is true even on the PTB, which has no rare or OOV words in its test set (because of preprocessing), by caching repetitive common words such as the. In true open-vocabulary settings (i.e., WikiText-2 and MWC), the improvements are much more pronounced, as expected. Computational complexity. In comparison with word-level models, our model has to read and generate each word character by character, and it also requires a softmax over the entire memory at every time step. However, the computation is still linear in terms of the length of the sequence, and the softmax over the memory cells and character 1498 EN FR DE ES CS FI RU dev test dev test dev test dev test dev test dev test dev test LSTM 1.793 1.736 1.669 1.621 1.780 1.754 1.733 1.667 2.191 2.155 1.943 1.913 1.942 1.932 HCLM 1.683 1.622 1.553 1.508 1.666 1.641 1.617 1.555 2.070 2.035 1.832 1.796 1.832 1.810 HCLM with Cache 1.591 1.538 1.499 1.467 1.605 1.588 1.548 1.498 2.010 1.984 1.754 1.711 1.777 1.761 Table 6: Results on MWC Corpus (bits-per-character). Word p(z | w) ↓ Word p(z | w) ↑ . 0.997 300 0.000 Lesnar 0.991 act 0.001 the 0.988 however 0.002 NY 0.985 770 0.003 Gore 0.977 put 0.003 Bintulu 0.976 sounds 0.004 Nerva 0.976 instead 0.005 , 0.974 440 0.005 UB 0.972 similar 0.006 Nero 0.967 27 0.009 Osbert 0.967 help 0.009 Kershaw 0.962 few 0.010 Manila 0.962 110 0.010 Boulter 0.958 Jersey 0.011 Stevens 0.956 even 0.011 Rifenburg 0.952 y 0.012 Arjona 0.952 though 0.012 of 0.945 becoming 0.013 31B 0.941 An 0.013 Olympics 0.941 unable 0.014 Table 7: Word types with the highest/lowest average posterior probability of having been copied from the cache while generating the test set. The probability tells whether the model used the cache given a word and its context. Left: Cache is used for frequent words (the, of) and proper nouns (Lesnar, Gore). Right: Character level generation is used for basic words and numbers. vocabulary are much smaller than word-level vocabulary. 
On the other hand, since the recurrent states are updated once per character (rather than per word) in our model, the distribution of operations is quite different. Depending on the hardware support for these operations (repeated updates of recurrent states vs. softmaxes), our model may be faster or slower. However, our model will have fewer parameters than a word-based model since most of the parameters in such models live in the word projection layers, and we use LSTMs in place of these. Non-English languages. For non-English languages, the pattern is largely similar for nonEnglish languages. This is not surprising since morphological processes may generate forms that are related to existing forms, but these still have Word p(z | w) ↓ Word p(z | w) ↑ Gore 0.977 770 0.003 Nero 0.967 246 0.037 Osbert 0.967 Lo 0.074 Kershaw 0.962 Pitcher 0.142 31B 0.941 Poets 0.143 Kirby 0.935 popes 0.143 CR 0.926 Yap 0.143 SM 0.924 Piso 0.143 impedance 0.923 consul 0.143 Blockbuster 0.900 heavyweight 0.143 Superfamily 0.900 cheeks 0.154 Amos 0.900 loser 0.164 Steiner 0.897 amphibian 0.167 Bacon 0.893 squads 0.167 filters 0.889 los 0.167 Lim 0.889 Keenan 0.167 Selfridge 0.875 sculptors 0.167 filter 0.875 Gen. 0.167 Lockport 0.867 Kipling 0.167 Germaniawerft 0.857 Tabasco 0.167 Table 8: Same as Table 7, except filtering for word types that occur fewer than 5 times in the training set. The cache component is used as expected even on rare words: proper nouns are extremely likely to have been cache-generated, whereas numbers and generic content words are less likely to have been; this indicates both the effectiveness of the prior at determining whether to use the cache and the burstiness of proper nouns. slight variations. Thus, they must be generated by the language model component (rather than from the cache). Still, the cache demonstrates consistent value in these languages. Finally, our analysis of the cache on English does show that it is being used to model word reuse, particularly of proper names, but also of frequent words. While empirical analysis of rare word distributions predicts that names would be reused, the fact that cache is used to model frequent words suggests that effective models of language should have a means to generate common words as units. Finally, our model disfavors copying numbers from the cache, even when they are available. This suggests that it has learnt that numbers are not generally repeated (in contrast to names). 1499 7 Related Work Caching language models were proposed to account for burstiness by Kuhn and De Mori (1990), and recently, this idea has been incorporated to augment neural language models with a caching mechanism (Merity et al., 2017; Grave et al., 2017). Open vocabulary neural language models have been widely explored (Sutskever et al., 2011; Mikolov et al., 2012; Graves, 2013, inter alia). Attempts to make them more aware of word-level dynamics, using models similar to our hierarchical formulation, have also been proposed (Chung et al., 2017). The only models that are open vocabulary language modeling together with a caching mechanism are the nonparametric Bayesian language models based on hierarchical Pitman–Yor processes which generate a lexicon of word types using a character model, and then generate a text using these (Teh, 2006; Goldwater et al., 2009; Chahuneau et al., 2013). These, however, do not use distributed representations on RNNs to capture long-range dependencies. 
8 Conclusion In this paper, we proposed a character-level language model with an adaptive cache which selectively assign word probability from past history or character-level decoding. And we empirically show that our model efficiently model the word sequences and achieved better perplexity in every standard dataset. To further validate the performance of our model on different languages, we collected multilingual wikipedia corpus for 7 typologically diverse languages. We also show that our model performs better than character-level models by modeling burstiness of words in local context. The model proposed in this paper assumes the observation of word segmentation. Thus, the model is not directly applicable to languages, such as Chinese and Japanese, where word segments are not explicitly observable. We will investigate a model which can marginalise word segmentation as latent variables in the future work. Acknowledgements We thank the three anonymous reviewers for their valuable feedback. The third author acknowledges the support of the EPSRC and nvidia Corporation. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR. Victor Chahuneau, Noah A. Smith, and Chris Dyer. 2013. Knowledge-rich morphological priors for bayesian language models. In Proc. NAACL. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2017. Hierarchical multiscale recurrent neural networks. In Proc. ICLR. Kenneth W Church. 2000. Empirical estimates of adaptation: the chance of two Noriegas is closer to p/2 than p2. In Proc. COLING. Kenneth W Church and William A Gale. 1995. Poisson mixtures. Natural Language Engineering 1(2):163– 190. Tim Cooijmans, Nicolas Ballas, César Laurent, Ça˘glar Gülçehre, and Aaron Courville. 2017. Recurrent batch normalization. In Proc. ICLR. Sharon Goldwater, Thomas L Griffiths, and Mark Johnson. 2009. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition 112(1):21–54. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language models with a continuous cache. In Proc. ICLR. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 . David Ha, Andrew Dai, and Quoc V Le. 2017. Hypernetworks. In Proc. ICLR. Harold Stanley Heaps. 1978. Information retrieval: Computational and theoretical aspects. Academic Press, Inc. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735– 1780. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. ICLR. Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. 2014. A clockwork RNN. In Proc. ICML. David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. 2017. Zoneout: Regularizing rnns by randomly preserving hidden activations. In Proc. ICLR. Roland Kuhn and Renato De Mori. 1990. A cachebased natural language model for speech recognition. IEEE transactions on pattern analysis and machine intelligence 12(6):570–583. 1500 Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proc. EMNLP. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In Proc. 
ICLR. Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. Interspeech. Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, HaiSon Le, Stefan Kombrink, and Jan Cernocky. 2012. Subword language modeling with neural networks. preprint (http://www. fit. vutbr. cz/imikolov/rnnlm/char. pdf) . Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2016. Recurrent dropout without memory loss. In Proc. COLING. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proc. CIKM. Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proc. ICML. Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proc. ACL. Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan R Salakhutdinov. 2016. On multiplicative integration with recurrent neural networks. In Proc. NIPS. Barret Zoph and Quoc V Le. 2016. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578 . A Corpus Statistics Fig. 4 show distribution of frequencies of OOV word types in 6 languages. 1501 FR DE ES CS FI RU Figure 4: Histogram of OOV word frequencies in MWC Corpus in different languages. 1502
2017
137
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1503–1513 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1138 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1503–1513 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1138 Bandit Structured Prediction for Neural Sequence-to-Sequence Learning Julia Kreutzer∗and Artem Sokolov∗and Stefan Riezler†,∗ ∗Computational Linguistics & †IWR, Heidelberg University, Germany {kreutzer,sokolov,riezler}@cl.uni-heidelberg.de Abstract Bandit structured prediction describes a stochastic optimization framework where learning is performed from partial feedback. This feedback is received in the form of a task loss evaluation to a predicted output structure, without having access to gold standard structures. We advance this framework by lifting linear bandit learning to neural sequence-to-sequence learning problems using attention-based recurrent neural networks. Furthermore, we show how to incorporate control variates into our learning algorithms for variance reduction and improved generalization. We present an evaluation on a neural machine translation task that shows improvements of up to 5.89 BLEU points for domain adaptation from simulated bandit feedback. 1 Introduction Many NLP tasks involve learning to predict a structured output such as a sequence, a tree or a graph. Sequence-to-sequence learning with neural networks has recently become a popular approach that allows tackling structured prediction as a mapping problem between variable-length sequences, e.g., from foreign language sentences into target-language sentences (Sutskever et al., 2014), or from natural language input sentences into linearized versions of syntactic (Vinyals et al., 2015) or semantic parses (Jia and Liang, 2016). A known bottleneck in structured prediction is the requirement of large amounts of gold-standard structures for supervised learning of model parameters, especially for data-hungry neural network models. Sokolov et al. (2016a,b) presented a framework for stochastic structured prediction under bandit feedback that alleviates the need for labeled output structures in learning: Following an online learning protocol, on each iteration the learner receives an input, predicts an output structure, and receives partial feedback in form of a task loss evaluation of the predicted structure.1 They “banditize” several objective functions for linear structured predictions, and evaluate the resulting algorithms with simulated bandit feedback on various NLP tasks. We show how to lift linear structured prediction under bandit feedback to non-linear models for sequence-to-sequence learning with attentionbased recurrent neural networks (Bahdanau et al., 2015). Our framework is applicable to sequenceto-sequence learning from various types of weak feedback. For example, extracting learning signals from the execution of structured outputs against databases has been established in the communities of semantic parsing and grounded language learning since more than a decade (Zettlemoyer and Collins, 2005; Clarke et al., 2010; Liang et al., 2011). Our work can build the basis for neural semantic parsing from weak feedback. In this paper, we focus on the application of machine translation via neural sequence-to-sequence learning. 
The standard procedure of training neural machine translation (NMT) models is to compare their output to human-generated translations and to infer model updates from this comparison. However, the creation of reference translations or post-edits requires professional expertise of users. Our framework allows NMT models to learn from feedback that is weaker than human references or post-edits. One could imagine a scenario of personalized machine translation where translations have to be adapted to the user’s specific purpose and domain. The feedback required by our methods can be provided by laymen users or can even 1The name “bandit feedback” is inherited from the problem of maximizing the reward for a sequence of pulls of arms of so-called “one-armed bandit” slot machines. 1503 be implicit, e.g., inferred from user interactions with the translated content on a web page. Starting from the work of Sokolov et al. (2016a,b), we lift their objectives to neural sequence-to-sequence learning. We evaluate the resulting algorithms on the task of French-toEnglish translation domain adaptation where a seed model trained on Europarl data is adapted to the NewsCommentary and the TED talks domain with simulated weak feedback. By learning from this feedback, we find 4.08 BLEU points improvements on NewsCommentary, and 5.89 BLEU points improvement on TED. Furthermore, we show how control variates can be integrated in our algorithms, yielding faster learning and improved generalization in our experiments. 2 Related Work NMT models are most commonly trained under a word-level maximum likelihood objective. Even though this objective has successfully been applied to many sequence-to-sequence learning tasks, the resulting models suffer from exposure bias, since they learn to generate output words based on the history of given reference words, not on their own predictions. Ranzato et al. (2016) apply techniques from reinforcement learning (Sutton and Barto, 1998; Sutton et al., 2000) and imitation learning (Schaal, 1999; Ross et al., 2011; Daum´e et al., 2009) to learn from feedback to the model’s own predictions. Furthermore, they address the mismatch between word-level loss and sequence-level evaluation metric by using a mixture of the REINFORCE (Williams, 1992) algorithm and the standard maximum likelihood training to directly optimize a sequence-level loss. Similarly, Shen et al. (2016) lift minimum risk training (Och, 2003; Smith and Eisner, 2006; Gimpel and Smith, 2010; Yuille and He, 2012; He and Deng, 2012) from linear models for machine translation to NMT. These works are closely related to ours in that they use the technique of score function gradient estimators (Fu, 2006; Schulman et al., 2015) for stochastic learning. However, the learning environment of Shen et al. (2016) is different from ours in that they approximate the true gradient of the risk objective in a full information setting by sampling a subset of translations and computing the expectation over their rewards. In our bandit setting, feedback to only a single sample per sentence is available, making the learning problem much harder. The approach by Ranzato et al. (2016) approximates the expectation with single samples, but still requires reference translations which are unavailable in the bandit setting. To our knowledge, the only work on training NMT from weak feedback is the work by He et al. (2016). They propose a dual-learning mechanism where two translation models are jointly trained on monolingual data. 
The feedback in this case is a reward signal from language models and a reconstruction error. This is attractive because the feedback can be generated automatically from monolingual data and does not require any human references. However, we are interested in using estimates of human feedback on translation quality to directly adapt the model to the users’ needs.

Our approach follows most closely the work of Sokolov et al. (2016a,b). They introduce bandit learning objectives for structured prediction and apply them to various NLP tasks, including machine translation with linear models. Their approach can be seen as an instantiation of reinforcement learning for one-state Markov decision processes under linear policy models. In this paper, we transfer their algorithms to non-linear sequence-to-sequence learning. Sokolov et al. (2016a) showed applications of linear bandit learning to tasks such as multiclass classification, OCR, and chunking, where learning can be done from scratch. We focus on lifting their linear machine translation experiments to the more complex NMT setting, which requires a warm start for training. This is done by training a seed model on one domain and adapting it to a new domain based on bandit feedback only. For this task we build on the work of Freitag and Al-Onaizan (2016), who investigate strategies to find the best of both worlds: models that adapt well to the new domain without deteriorating on the old domain. In contrast to previous approaches to domain adaptation for NMT, we do not require in-domain parallel data, but consult direct feedback on the translations generated for the new domain.

3 Neural Machine Translation
Neural models for machine translation are based on a sequence-to-sequence learning architecture consisting of an encoder and a decoder (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015). An encoder Recurrent Neural Network (RNN) reads in the source sentence and a decoder RNN generates the target sentence conditioned on the encoded source. The input to the encoder is a sequence of vectors x = (x_1, . . . , x_{T_x}) representing a sequence of source words of length T_x. In the approach of Sutskever et al. (2014), they are encoded into a single vector c = q({h_1, . . . , h_{T_x}}), where h_t = f(x_t, h_{t−1}) is the hidden state of the RNN at time t. Several choices are possible for the non-linear functions f and q: here we are using a Gated Recurrent Unit (GRU) (Chung et al., 2014) for f, and for q an attention mechanism that defines the context vector as a weighted sum over encoder hidden states (Bahdanau et al., 2015; Luong et al., 2015a).

The decoder RNN predicts the next target word y_t at time t given the context vector c and the previous target words y_{<t} = {y_1, . . . , y_{t−1}} from a probability distribution over the target vocabulary V. This distribution is the result of a softmax transformation of the decoder outputs o = {o_1, . . . , o_{T_y}}, such that

p_θ(y_t = w_i | y_{<t}, c) = exp(o_{w_i}) / Σ_{v=1}^{V} exp(o_{w_v}).

The probability of a full sequence of outputs y = (y_1, . . . , y_{T_y}) of length T_y is defined as the product of the conditional word probabilities:

p_θ(y|x) = Π_{t=1}^{T_y} p_θ(y_t | y_{<t}, c).

Since this encoder-decoder architecture is fully differentiable, it can be trained with gradient descent methods.
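For concreteness, the word distribution and sequence probability just defined correspond to the following computation. This is an illustrative NumPy sketch with our own naming; it is not the authors' Neural Monkey implementation.

```python
import numpy as np

def word_distribution(o_t):
    """p_theta(y_t = w_i | y_<t, c) = exp(o_{w_i}) / sum_v exp(o_{w_v}); o_t are the decoder logits."""
    z = o_t - o_t.max()                     # numerically stabilized softmax
    e = np.exp(z)
    return e / e.sum()

def sequence_log_prob(logits_per_step, target_ids):
    """log p_theta(y | x) = sum_t log p_theta(y_t | y_<t, c)."""
    return float(sum(np.log(word_distribution(o_t)[y_t])
                     for o_t, y_t in zip(logits_per_step, target_ids)))
```

Working in log space avoids underflow for long sequences, which is also how the sample probabilities returned by the sampling algorithms below are accumulated.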
Given a parallel training set of S source sentences and their reference translations D = {(x(s), y(s))}S s=1, we can define a wordlevel Maximum Likelihood Estimation (MLE) objective, which aims to find the parameters ˆθMLE = arg max θ LMLE(θ) of the following loss function: LMLE(θ) = S X s=1 log pθ(y(s)|x(s)) = S X s=1 Ty X t=1 log pθ(yt|x(s), y(s) <t). This loss function is non-convex for the case of neural networks. Clever initialization strategies, Algorithm 1 Neural Bandit Structured Prediction Input: Sequence of learning rates γk Output: Optimal parameters ˆθ 1: Initialize θ0 2: for k = 0, . . . , K do 3: Observe xk 4: Sample ˜yk ∼pθ(y|xk) 5: Obtain feedback ∆(˜yk) 6: θk+1 = θk −γk sk 7: Choose a solution ˆθ from the list {θ0, . . . , θK} adaptive learning rates and momentum techniques are required to find good local maxima and to speed up convergence (Sutskever et al., 2013). Another trick of the trade is to ensemble several models with different random initializations to improve over single models (Luong et al., 2015a). At test time, we face a search problem to find the sequence of target words with the highest probability. Beam search reduces the search error in comparison to greedy search, but also exponentially increases decoding time. 4 Neural Bandit Structured Prediction Algorithm 1 is an adaptation of the Bandit Structured Prediction algorithm of Sokolov et al. (2016b) to neural models: For K rounds, a model with parameters θ receives an input, samples an output structure, and receives user feedback. Based on this feedback, a stochastic gradient sk is computed and the model parameters are updated. As a post-optimization step, a solution ˆθ is selected from the iterates. This is done with onlineto-batch conversion by choosing the model with optimal performance on held-out data. The core of the algorithm is the sampling: if the model distribution is very peaked, the model exploits, i.e., it presents the most probable outputs to the user. If the distribution is close to uniform, the model explores, i.e., it presents random outputs to the user. The balance between exploitation and exploration is crucial to the learning process: in the beginning the model is rather uninformed and needs to explore in order to find outputs with high reward, while in the end it ideally converges towards a peaked distribution that exactly fits the user’s needs. Pre-training the model, i.e. setting θ0 wisely, ensures a reasonable exploitationexploration trade-off. This online learning algorithm can be applied 1505 to any objective L provided the stochastic gradients sk are unbiased estimators of the true gradient of the objective, i.e., we require ∇L = E[sk]. In the following, we will present objectives from Sokolov et al. (2016b) transferred to neural models, and explain how they can be enhanced by control variates. 4.1 Expected Loss (EL) Minimization The first objective is defined as the expectation of a task loss ∆(˜y), e.g. −BLEU(˜y), over all input and output structures: LEL(θ) =Ep(x) pθ(˜y|x) [∆(˜y)] . (1) In the case of full-information learning where reference outputs are available, we could evaluate all possible outputs against the reference to obtain an exact estimation of the loss function. However, this is not feasible in our setting since we only receive partial feedback for a single output structure per input. Instead, we use stochastic approximation to optimize this loss. The stochastic gradient for this objective is computed as follows: sEL k =∆(˜y)∂log pθ(˜y|xk) ∂θ . 
(2) Objective (1) is known from minimum risk training (Och, 2003) and has been lifted to NMT by Shen et al. (2016) – but not for learning from weak feedback. Equation (2) is an instance of the score function gradient estimator (Fu, 2006) where ∇log pθ(˜y|xk) (3) denotes the score function. We give an algorithm to sample structures from an encoder-decoder model in Algorithm 2. It corresponds to the algorithm presented by Shen et al. (2016) with the difference that it samples single structures, does not assume a reference structure, and additionally returns the sample probabilities. A similar objective has also been used in the REINFORCE algorithm (Williams, 1992) which has been adapted to NMT by Ranzato et al. (2016). 4.2 Pairwise Preference Ranking (PR) The previous objective requires numerical feedback as an estimate of translation quality. Alternatively, we can learn from pairwise preference judgments that are formalized in preference ranking objectives. Let P(x) = {⟨yi, yj⟩|yi, yj ∈Y(x)} denote the set of output pairs for an input x, and let ∆(⟨yi, yj⟩) : P(x) →[0, 1] denote a task loss function that specifies a dispreference of yi over yj. In our experimental simulations we use two types of pairwise feedback. Firstly, continuous pairwise feedback2 is computed as ∆(⟨yi, yj⟩) = ∆(yj) −∆(yi), and secondly, binary feedback is computed as ∆(⟨yi, yj⟩) = ( 1 if ∆(yj) > ∆(yi), 0 otherwise. Analogously to the sequence-level sampling for linear models (Sokolov et al., 2016b), we define the following probabilities for word-level sampling: p+ θ (˜yt = wi|x, ˆy<t) = exp(owi) PV v=1 exp(owv) , p− θ (˜yt = wj|x, ˆy<t) = exp(−owj) PV v=1 exp(−owv) . The effect of the negation within the softmax is that the two distributions p+ θ and p− θ rank the next candidate target words ˜yt (given the same history, here the greedy output ˆy<t) in opposite order. Globally normalized models as in the linear case, or LSTM-CRFs (Huang et al., 2015) for the non-linear case would allow sampling full structures such that the ranking over full structures is reversed. But in the case of locally normalized RNNs we retrieve only locally reversed-rank samples. Since we want the model to learn to rank ˜yi over ˜yj, we would have to sample ˜yi word-byword from p+ θ and ˜yj from p− θ . However, sampling all words of ˜yj from p− θ leads to translations that are neither fluent nor source-related, so we propose to randomly choose one position of ˜yj where the next word is sampled from p− θ and sample the remaining words from p+ θ . We found that this method produces suitable negative samples, which are only slightly perturbed and still relatively fluent and source-related. A detailed algorithm is given in Algorithm 3. In the same manner as for linear models, we define the probability of a pair of sequences as pθ(⟨˜yi, ˜yj⟩|x) = p+ θ (˜yi|x) × p− θ (˜yj|x). 2Note that our definition of continuous feedback is slightly different from the one proposed in Sokolov et al. (2016b) where updates are only made for misrankings. 1506 Algorithm 2 Sampling Structures Input: Model θ, target sequence length limit Ty Output: Sequence of words w = (w1, . . . , wT y) and log-probability p 1: w0 = START, p0 = 0 2: w = (w0) 3: for t ←1 . . . Ty do 4: wt ∼pθ(w|x, w<t) 5: pt = pt−1 + log pθ(w|x, w<t) 6: w = (w1, . . . , wt−1, wt) 7: end for 8: Return w and pT Note that with the word-based sampling scheme described above, the sequence ˜yj also includes words sampled from p+ θ . 
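To make the EL update of Equation (2) and Algorithm 1 concrete, the following is a minimal PyTorch-style sketch of one bandit round. The sampling and feedback interfaces (model.sample, feedback_fn) and the optional baseline object are our own illustrative assumptions, not part of the authors' Neural Monkey implementation; log_prob is assumed to be a differentiable scalar.

```python
class RunningMean:
    """Running average of past feedback; corresponds to the average reward baseline of Sec. 4.3 below."""
    def __init__(self):
        self.mean, self.n = 0.0, 0
    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n

def el_bandit_step(model, optimizer, src, feedback_fn, baseline=None):
    """One round of Algorithm 1 with the EL stochastic gradient of Eq. (2)."""
    sampled, log_prob = model.sample(src)    # \tilde{y}_k ~ p_theta(y | x_k) and log p_theta(\tilde{y}_k | x_k)
    delta = feedback_fn(sampled)             # weak feedback, e.g. -gGLEU in the simulation of Sec. 5
    if baseline is not None:                 # optionally center the feedback with the running average
        centered = delta - baseline.mean
        baseline.update(delta)
        delta = centered
    surrogate = delta * log_prob             # autograd of this term yields s_k = Delta * grad log p_theta
    optimizer.zero_grad()
    surrogate.backward()
    optimizer.step()
    return delta
```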
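Similarly, the word-level pair sampling of Algorithm 3 can be sketched as follows. The step-wise decoder interface is again an assumption made for illustration, and end-of-sequence handling is omitted in favor of the fixed length limit used in the algorithms.

```python
import numpy as np

def p_plus(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def p_minus(logits):
    e = np.exp(-(logits - logits.min()))    # softmax of the negated scores: locally reversed ranking
    return e / e.sum()

def sample_pair(step_logits, max_len, rng=None):
    """step_logits(prefix) -> decoder logits over V given the greedy prefix \hat{y}_<t (assumed interface)."""
    rng = rng or np.random.default_rng()
    flip = int(rng.integers(1, max_len + 1))            # the single position sampled from p-
    y_i, y_j, greedy = [], [], []
    for t in range(1, max_len + 1):
        logits = step_logits(greedy)
        y_i.append(int(rng.choice(len(logits), p=p_plus(logits))))
        dist = p_minus(logits) if t == flip else p_plus(logits)
        y_j.append(int(rng.choice(len(logits), p=dist)))
        greedy.append(int(np.argmax(logits)))           # both samples condition on the shared greedy history
    return y_i, y_j                                     # \tilde{y}_i preferred over the perturbed \tilde{y}_j
```

Because only one position of the dispreferred sequence is drawn from p-, it stays close to a fluent, source-related translation, which is exactly the behaviour motivated above.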
The pairwise preference ranking objective expresses an expectation over losses over these pairs: LPR(θ) =Ep(x) pθ(⟨˜yi,˜yj⟩|x) [∆(⟨˜yi, ˜yj⟩)] . (4) The stochastic gradient for this objective is sPR k =∆(⟨˜yi, ˜yj⟩) (5) × ∂log p+ θ (˜yi|xk) ∂θ + ∂log p− θ (˜yj|xk) ∂θ  . This training procedure resembles well-known approaches for noise contrastive estimation (Gutmann and Hyv¨arinen, 2010) with negative sampling that are commonly used for neural language modeling (Collobert et al., 2011; Mnih and Teh, 2012; Mikolov et al., 2013). In these approaches, negative samples are drawn from a non-parametric noise distribution, whereas we draw them from the perturbed model distribution. 4.3 Control Variates The stochastic gradients defined in equations (2) and (5) can be used in stochastic gradient descent optimization (Bottou et al., 2016) where the full gradient is approximated using a minibatch or a single example in each update. The stochastic choice, in our case on inputs and outputs, introduces noise that leads to slower convergence and degrades performance. In the following, we explain how antithetic and additive control variate techniques from Monte Carlo simulation (Ross, 2013) can be used to remedy these problems. The idea of additive control variates is to augment a random variable X whose expectation is Algorithm 3 Sampling Pairs of Structures Input: Model θ, target sequence length limit Ty Output: Pair of sequences ⟨w, w′⟩and their logprobability p 1: p0 = 0 2: w, w′, ˆw = (START) 3: i ∼U(1, T) 4: for t ←1 . . . Ty do 5: ˆwt = arg maxw∈V p+ θ (w|x, ˆw<t) 6: wt ∼p+ θ (w|x, ˆw<t) 7: pt = pt−1 + log p+ θ (wt|x, ˆw<t) 8: if i = t then 9: w′ t ∼p− θ (w|x, ˆw<t) 10: pt = pt + log p− θ (w′ t|x, ˆw<t) 11: else 12: w′ t ∼p+ θ (w|x, ˆw<t) 13: pt = pt + log p+ θ (w′ t|x, ˆw<t) 14: end if 15: w = (w1, . . . , wt−1, wt) 16: w′ = (w′ 1, . . . , w′ t−1, w′ t) 17: ˆw = ( ˆw1, . . . , ˆwt−1, ˆwt) 18: end for 19: Return ⟨w, w′⟩and pT sought, by another random variable Y to which X is highly correlated. Y is then called the control variate. Let ¯Y furthermore denote its expectation. Then the following quantity X−ˆc Y +ˆc ¯Y is an unbiased estimator of E[X]. In our case, the random variable of interest is the noisy gradient X = sk from Equation (2). The variance reduction effect of control variates can be seen by computing the variance of this quantity: Var(X −ˆc Y ) = Var(X) + ˆc2Var(Y ) (6) −2ˆc Cov(X, Y ). Choosing a control variate such that Cov(X, Y ) is positive and high enough, the variance of the gradient estimate will be reduced. An example is the average reward baseline known from reinforcement learning (Williams, 1992), yielding Yk = ∇log pθ(˜y|xk) 1 k k X j=1 ∆(˜yj). (7) The optimal scalar ˆc can be derived easily by taking the derivative of (6), leading to ˆc = Cov(X,Y ) Var(X) . This technique has been applied to using the score 1507 function (Equation (3)) as control variate in Ranganath et al. (2014), yielding the following control variate: Y k = ∇log pθ(˜y|xk). (8) Note that for both types of control variates, (7) and (8), the expectation ¯Y is zero, simplifying the implementation. However, the optimal scalar ˆc has to be estimated for every entry of the gradient separately for the score function control variate. We will explore both types of control variates for the stochastic gradient (2) in our experiments. A further effect of control variates is to reduce the magnitude of the gradient, the more so the more the stochastic gradient and the control variate covary. 
For L-Lipschitz continuous functions, a reduced gradient norm directly leads to a bound on L which appears in the algorithmic stability bounds of Hardt et al. (2016). This effect of improved generalization by control variates is empirically validated in our experiments. A similar variance reduction effect can be obtained by antithetic control variates. Here E[X] is approximated by the estimator X1+X2 2 whose variance is Var X1 + X2 2  = 1 4 Var(X1) (9) + Var(X2) + 2Cov(X1, X2)  . Choosing the variates X1 and X2 such that Cov(X1, X2) is negative will reduce the variance of the gradient estimate. Under certain assumptions, the stochastic gradient (5) of the pairwise preference objective can be interpreted as an antithetic estimator of the score function (3). The antithetic variates in this case would be X1 = ∇log p+ θ (˜yi|xk), (10) X2 = ∇log p− θ (˜yj|xk), where an antithetic dependence of X2 on X1 can be achieved by construction of p+ θ and p− θ (see Capriotti (2008) which is loosely related to our approach). Similar to control variates, antithetic variates have the effect of shrinking the gradient norm, the more so the more the variates are antithetically correlated, leading to possible improvements in algorithmic stability (Hardt et al., 2016). 5 Experiments In the following, we present an experimental evaluation of the learning objectives presented above Domain Version Train Valid. Test Europarl v.5 1.6M 2k 2k News Commentary WMT07 40k 1k 2k TED TED2013 153k 2k 2k Table 1: Number of parallel sentences for training, validation and test sets for French-to-English domain adaptation. on machine translation domain adaptation. We compare how the presented neural bandit learning objectives perform in comparison to linear models, then discuss the handling of unknown words and eventually investigate the impact of techniques for variance reduction. 5.1 Setup Data. We perform domain adaptation from Europarl (EP) to News Commentary (NC) and TED talks (TED) for translations from French to English. Table 1 provides details about the datasets. For data pre-processing we follow the procedure of Sokolov et al. (2016a,b) using cdec tools for filtering, lowercasing and tokenization. The challenge for the bandit learner is to adapt from the EP domain to NC or TED with weak feedback only. NMT Models. We choose a standard encoderdecoder architecture with single-layer GRU RNNs with 800 hidden units, a word embedding size of 300 and tanh activations. The encoder consists of a bidirectional RNN, where the hidden states of backward and forward RNN are concatenated. The decoder uses the attention mechanism proposed by Bahdanau et al. (2015).3 Source and target vocabularies contain the 30k most frequent words of the respective parts of the training corpus. We limit the maximum sentence length to 50. Dropout (Srivastava et al., 2014) with a probability of 0.5 is applied to the network in several places: on the embedded inputs, before the output layer, and on the initial state of the decoder RNN. The gradient is clipped when its norms exceeds 1.0 to prevent exploding gradients and stabilize learning (Pascanu et al., 2013). All models are implemented and trained with the sequence learning framework Neural Monkey (Libovick`y et al., 3We do not use beam search nor ensembling, although we are aware that higher performance is almost guaranteed with these techniques. Our goal is to show relative differences between different models, so a simple setup is sufficient for the purpose of our experiments. 
1508 2016; Bojar et al., 2016).4 They are trained with a minibatch size of 20, fitting onto single 8GB GPU machines. The training dataset is shuffled before each epoch. Baselines. The out-of-domain baseline is trained on the EP training set with standard MLE. For both NC and TED domains, we train two fullinformation in-domain baselines: The first indomain baseline is trained on the relatively small in-domain training data. The second in-domain baseline starts from the out-of-domain model and is further trained on the in-domain data. All baselines are trained with MLE and Adam (Kingma and Ba, 2014) (α = 1 × 10−4, β1 = 0.9, β2 = 0.999) until their performance stops increasing on respective held-out validation sets. The gap between the performance of the out-ofdomain model and the in-domain models defines the range of possible improvements for bandit learning. All models are evaluated with Neural Monkey’s mteval. For statistical significance tests we used Approximate Randomization testing (Noreen, 1989). Bandit Learning. Bandit learning starts with the parameters of the out-of-domain baseline. The bandit models are expected to improve over the out-of-domain baseline by receiving feedback from the new domain, but at most to reach the indomain baseline since the feedback is weak. The models are trained with Adam on in-domain data for at most 20 epochs. Adam’s step-size parameter α was tuned on the validation set and was found to perform best when set to 1 × 10−5 for non-pairwise, 1 × 10−6 for pairwise objectives on NC, 1 × 10−7 for pairwise objectives on TED. The best model parameters, selected with early stopping on the in-domain validation set, are evaluated on the held-out in-domain test set. In the spirit of Freitag and Al-Onaizan (2016) they are additionally evaluated on the out-of-domain test set to investigate how much knowledge of the old domain the models lose while adapting to the new domain. Bandit learning experiments are repeated two times, with different random seeds, and mean BLEU scores with standard deviation are reported. 4The Neural Monkey fork https://github. com/juliakreutzer/bandit-neuralmonkey contains bandit learning objectives and the configuration files for our experiments. Feedback Simulation. Weak feedback is simulated from the target side of the parallel corpus, but references are never revealed to the learner. Sokolov et al. (2016a,b) used a smoothed version of per-sentence BLEU for simulating the weak feedback for generated translations from the comparison with reference translations. Here, we use gGLEU instead, which Wu et al. (2016) recently introduced for learning from sentence-level reward signals correlating well with corpus BLEU. This metric is closely related to BLEU, but does not have a brevity penalty and considers the recall of matching n-grams. It is defined as the minimum of recall and precision over the total n-grams up to a certain n. Hence, for our experiments ∆(˜y) = −gGLEU(˜y, y), where ˜y is a sample translation and y is the reference translation. Unknown words. One drawback of NMT models is their limitation to a fixed source- and target vocabulary. In a domain adaptation setting, this limitation has a critical impact to the translation quality. The larger the distance between old and new domain, the more words in the new domain are unknown to the models trained on the old domain (represented with a special UNK token). We consider two strategies for this problem for our experiments: 1. UNK-Replace: Jean et al. (2015) and Luong et al. 
(2015b) replace generated UNK tokens with aligned source words or their lexical translations in a post-processing step. Freitag and Al-Onaizan (2016) and Hashimoto et al. (2016) demonstrated that this technique is beneficial for NMT domain adaptation. 2. BPE: Sennrich et al. (2016) introduce byte pair encoding (BPE) for word segmentation to build translation models on sub-word units. Rare words are decomposed into subword units, while the most frequent words remain single vocabulary items. For UNK-Replace we use fast align to generate lexical translations on the EP training data. When an UNK token is generated, we look up the attention weights and find the source token that receives most attention in this step. If possible, we replace the UNK token by its lexical translation. If it is not included in the lexical translations, it is replaced by the source token. The main benefit of this technique is that it deals well 1509 Algorithm Train data Iter. EP NC TED MLE EP 12.3M 31.44 26.98 23.48 MLE-UNK 31.82 28.00 24.59 MLE-BPE 12.0M 31.81 27.20 24.35 Table 2: Out-of-domain NMT baseline results (BLEU) on in- and out-of-domain test sets trained only on EP data. with unknown named entities that are just passed through from source to target. However, since it is a non-differentiable post-processing step, the NMT model cannot directly be trained for this behavior. Therefore we also train sub-word level NMT with BPE. We apply 29,800 merge operations to obtain a vocabulary of 29,908 sub-words. The procedure for training these models is exactly the same as for the word-based models. The advantage of this method is that the model is in principle able to generate any word composing it from sub-word units. However, training sequences become longer and candidate translations are sampled on a sub-word level, which introduces the risk of sampling nonsense words. Control variates. We implement the average baseline control variate as defined in Equation 7, which results in keeping an running average over previous losses. Intuitively, absolute gGLEU feedback is turned into relative feedback that reflects the current state of the model. The sign of the update is switched when the gGLEU for the current sample is worse than the average gGLEU, so the model makes a step away from it, while in the case of absolute feedback it would still make a small step towards it. In addition, we implement the score function control variate with a running estimate ˆck = 1 k Pk j=1 Cov(sj,∇log pθ(˜yj|xj)) Var(sj) . 5.2 Results In the following, we discuss the results of the experimental evaluation of the models described above. The out-of-domain baseline results are given in Table 2, those for the in-domain baselines in 3. The results for bandit learning on NC and TED are reported in Table 4. For bandit learning we give mean improvements over the respective out-of-domain baselines in the Diff.-columns. Baselines. The NMT out-of-domain baselines, reported in Table 2, perform comparable to the linear baseline from Sokolov et al. (2016a,b) on Algorithm Train data Iter. EP NC MLE NC 978k 13.67 22.32 MLE-UNK 13.83 22.56 MLE-BPE 1.0M 14.09 23.01 MLE EP→NC 160k 26.66 31.91 MLE-UNK 27.19 33.19 MLE-BPE 160k 27.14 33.31 Algorithm Train data Iter. EP TED MLE TED 2.2M 14.16 32.71 MLE-UNK 15.15 33.16 MLE-BPE 3.0M 14.18 32.81 MLE EP→TED 460k 23.88 33.65 MLE-UNK 24.64 35.57 MLE-BPE 2.2M 23.39 36.23 Table 3: In-domain NMT baselines results (BLEU) on in- and out-of-domain test sets. The EP→NC is first trained on EP, then fine-tuned on NC. 
The EP→TED is first trained on EP, then finetuned on TED. NC, but the in-domain EP→NC (Table 3) baselines outperform the linear baseline by more than 3 BLEU points. Continuing training of a pre-trained out-of-domain model on a small amount of in domain data is very hence effective, whilst the performance of the models solely trained on small indomain data is highly dependent on the size of this training data set. For TED, the in-domain dataset is almost four times as big as the NC training set, so the in-domain baselines perform better. This effect was previously observed by Luong and Manning (2015) and Freitag and Al-Onaizan (2016). Bandit Learning. The NMT bandit models that optimize the EL objective yield generally a much higher improvement over the out-of-domain models than the corresponding linear models: As listed in Table 4, we find improvements of between 2.33 and 2.89 BLEU points on the NC domain, and between 4.18 and 5.18 BLEU points on the TED domain. In contrast, the linear models with sparse features and hypergraph re-decoding achieved a maximum improvement of 0.82 BLEU points on NC. Optimization of the PR objective shows improvements of up to 1.79 BLEU points on NC (compared to 0.6 BLEU points for linear models), but no significant improvement on TED. The biggest impact of this variance reduction tech1510 Algorithm Iter. EP NC Diff. EL 317k 30.36±0.20 29.34±0.29 2.36 EL-UNK* 317k 30.73±0.20 30.33±0.42 2.33 EL-UNK** 349k 30.67±0.04 30.45±0.27 2.45 EL-BPE 543k 30.09±0.31 30.09±0.01 2.89 PR-UNK** (bin) 22k 30.76±0.03 29.40±0.02 1.40 PR-BPE (bin) 14k 31.02±0.09 28.92±0.03 1.72 PR-UNK** (cont) 12k 30.81±0.02 29.43±0.02 1.43 PR-BPE (cont) 8k 30.91±0.01 28.99±0.00 1.79 SF-EL-UNK** 713k 29.97±0.09 30.61±0.05 2.61 SF-EL-BPE 375k 30.46±0.10 30.20±0.11 3.00 BL-EL-UNK** 531k 30.19±0.37 31.47±0.09 3.47 BL-EL-BPE 755k 29.88±0.07 31.28±0.24 4.08 (a) Domain adaptation from EP to NC. Algorithm Iter. EP TED Diff. EL 976k 29.34±0.42 27.66±0.03 4.18 EL-UNK* 976k 29.68±0.29 29.44±0.06 4.85 EL-UNK** 1.1M 29.62±0.15 29.77±0.15 5.18 EL-BPE 831k 30.03±0.43 28.54±0.04 4.18 PR-UNK** (bin) 14k 31.84±0.01 24.85±0.00 0.26 PR-BPE (bin) 69k 31.77±0.01 24.55±0.01 0.20 PR-UNK** (cont) 9k 31.85±0.02 24.85±0.01 0.26 PR-BPE (cont) 55k 31.79±0.02 24.59±0.01 0.24 SF-EL-UNK** 658k 30.18±0.15 29.12±0.10 4.53 SF-EL-BPE 590k 30.32±0.26 28.51±0.18 4.16 BL-EL-UNK** 644k 29.91±0.03 30.44±0.13 5.85 BL-EL-BPE 742k 29.84±0.61 30.24±0.46 5.89 (b) Domain adaptation from EP to TED. Table 4: Bandit NMT results (BLEU) on EP, NC and TED test sets. UNK* models involve UNK replacement only during testing, UNK** include UNK replacement already during training. For PR, either binary (bin) or continuous feedback (cont) was used. Control variates: average reward baseline (BL) and score function (SF). Results are averaged over two independent runs and standard deviation is given in subscripts. Improvements over respective out-of-domain models are given in the Diff.-columns. nique is a considerable speedup of training speed of 1 to 2 orders of magnitude compared to EL. A beneficial side-effect of NMT learning from weak feedback is that the knowledge from the out-domain training is not simply “overwritten”. This happens to full-information in-domain tuning where more than 4 BLEU points are lost in an evaluation on the out-domain data. On the contrary, the bandit learning models still achieve high results on the original domain. 
This is useful for conservative domain adaptation, where the performance of the models in the old domain is still relevant. Unknown words. By handling unknown words with UNK-Replace or BPEs, we find consistent improvements over the plain word-based models for all baselines and bandit learning models. We observe that the models with UNK replacement essentially benefit from passing through source tokens, and only marginally from lexical translations. Bandit learning models take particular advantage of UNK replacement when it is included already during training. The sub-word models achieve the overall highest improvement over the baselines, although sometimes generating nonsense words. Control variates. Applying the score function control variate to EL optimization does not largely change learning speed or BLEU results. However, the average reward control variate leads to improvements of around 1 BLEU over the EL optimization without variance reduction on both domains. 6 Conclusion In this paper, we showed how to lift structured prediction under bandit feedback from linear models to non-linear sequence-to-sequence learning using recurrent neural networks with attention. We introduced algorithms to train these models under numerical feedback to single output structures or under preference rankings over pairs of structures. In our experimental evaluation on the task of neural machine translation domain adaptation, we found relative improvements of up to 5.89 BLEU points over out-of-domain seed models, outperforming also linear bandit models. Furthermore, we argued that pairwise ranking under bandit feedback can be interpreted as a use of antithetic variates, and we showed how to include average reward and score function baselines as control variates for improved training speed and generalization. In future work, we would like to apply the presented non-linear bandit learners to other structured prediction tasks. Acknowledgments This research was supported in part by the German research foundation (DFG), and in part by a research cooperation grant with the Amazon Development Center Germany. 1511 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. San Diego, CA. Ondˇrej Bojar, Roman Sudarikov, Tom Kocmi, Jindˇrich Helcl, and Ondˇrej Cıfka. 2016. UFAL submissions to the IWSLT 2016 MT track. In IWSLT. Seattle, WA. Leon Bottou, Frank E. Curtis, and Jorge Nocedal. 2016. Optimization methods for large-scale machine learning. eprint arXiv:1606.04838v1 . Luca Capriotti. 2008. Reducing the variance of likelihood ratio greeks in Monte Carlo. In WCS. Miami, FL. Kyunghyun Cho, Bart van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In EMNLP. Doha, Qatar. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. eprint arXiv:1412.3555 . James Clarke, Dan Goldwasser, Wing-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In CoNLL. Portland, OR. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR 12:2461–2505. Hal Daum´e, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning 75(3):297–325. 
Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. eprint arXiv:1612.06897 . Michael C. Fu. 2006. Gradient estimation. In S.G. Henderson and B.L. Nelson, editors, Handbook in Operations Research and Management Science, volume 13, pages 575–616. Kevin Gimpel and Noah A. Smith. 2010. Softmaxmargin training for structured log-linear models. Technical Report CMU-LTI-10-008, Carnegie Mellon University. Michael Gutmann and Aapo Hyv¨arinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS. Sardinia, Italy. Moritz Hardt, Ben Recht, and Yoram Singer. 2016. Train faster, generalize better: Stability of stochastic gradient descent. In ICML. New York, NY. Kazuma Hashimoto, Akiko Eriguchi, and Yoshimasa Tsuruoka. 2016. Domain adaptation and attentionbased unknown word replacement in chinese-tojapanese neural machine translation. In COLING Workshop on Asian Translation. Osaka, Japan. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In NIPS. Barcelona, Spain. Xiaodong He and Li Deng. 2012. Maximum expected BLEU training of phrase and lexicon translation models. In ACL. Jeju Island, Korea. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. eprint arXiv:1508.01991 . S´ebastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. Montreal neural machine translation systems for WMT’15. In WMT. Lisbon, Portugal. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In ACL. Berlin, Germany. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. eprint arXiv:1412.6980 . Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In ACL-HLT. Portland, OR. Jindˇrich Libovick`y, Jindˇrich Helcl, Marek Tlust`y, Pavel Pecina, and Ondˇrej Bojar. 2016. CUNI system for WMT16 automatic post-editing and multimodal translation tasks. In WMT. Berlin, Germany. Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domains. In IWSLT. Da Nang, Vietnam. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attentionbased neural machine translation. In EMNLP. Lisbon, Portugal. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In ACL. Beijing, China. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. Lake Tahoe, CA. Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In ICML. Edinburgh, Scotland. Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypotheses. An Introduction. Wiley. 1512 Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In HLT-NAACL. Edmonton, Canada. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In ICML. Atlanta, GA. Rajesh Ranganath, Sean Gerrish, and David M. Blei. 2014. Black box variational inference. In AISTATS. Reykjavik, Iceland. MarcAurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR. San Juan, Puerto Rico. 
Sheldon M. Ross. 2013. Simulation. Elsevier, fifth edition. St´ephane Ross, Geoffrey J Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS. Ft. Lauderdale, FL. Stefan Schaal. 1999. Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences 3(6):233–242. John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. 2015. Gradient estimation using stochastic computation graphs. In NIPS. Montreal, Canada. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Berlin, Germany. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In ACL. Berlin, Germany. David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In COLING-ACL. Sydney, Australia. Artem Sokolov, Julia Kreutzer, Christopher Lo, and Stefan Riezler. 2016a. Learning structured predictors from bandit feedback for interactive NLP. In ACL. Berlin, Germany. Artem Sokolov, Julia Kreutzer, Christopher Lo, and Stefan Riezler. 2016b. Stochastic structured prediction under bandit feedback. In NIPS. Barcelona, Spain. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR 15(1):1929–1958. Ilya Sutskever, James Martens, George E. Dahl, and Geoffrey E. Hinton. 2013. On the importance of initialization and momentum in deep learning. In ICML. Atlanta, GA. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS. Montreal, Canada. Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning. An Introduction. The MIT Press. Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In NIPS. Vancouver, Canada. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In NIPS. Montreal, Canada. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning 20:229–256. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. eprint arXiv:1609.08144 . Alan Yuille and Xuming He. 2012. Probabilistic models of vision and max-margin methods. Frontiers of Electrical and Electronic Engineering 7(1):94–106. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI. Edinburgh, Scotland. 1513
2017
138
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1514–1523 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1139 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1514–1523 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1139 Prior Knowledge Integration for Neural Machine Translation using Posterior Regularization Jiacheng Zhang†, Yang Liu†‡∗, Huanbo Luan†, Jingfang Xu# and Maosong Sun†‡ †State Key Laboratory of Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Science and Technology, Tsinghua University, Beijing, China ‡Jiangsu Collaborative Innovation Center for Language Competence, Jiangsu, China #Sogou Inc., Beijing, China Abstract Although neural machine translation has made significant progress recently, how to integrate multiple overlapping, arbitrary prior knowledge sources remains a challenge. In this work, we propose to use posterior regularization to provide a general framework for integrating prior knowledge into neural machine translation. We represent prior knowledge sources as features in a log-linear model, which guides the learning process of the neural translation model. Experiments on ChineseEnglish translation show that our approach leads to significant improvements. 1 Introduction The past several years have witnessed the rapid development of neural machine translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2015), which aims to model the translation process using neural networks in an end-to-end manner. With the capability of capturing long-distance dependencies due to the gating (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) and attention (Bahdanau et al., 2015) mechanisms, NMT has shown remarkable superiority over conventional statistical machine translation (SMT) across a variety of natural languages (Junczys-Dowmunt et al., 2016). Despite the apparent success, NMT still suffers from one significant drawback: it is difficult to integrate prior knowledge into neural networks. On one hand, neural networks use continuous realvalued vectors to represent all language structures involved in the translation process. While these vector representations prove to be capable of capturing translation regularities implicitly (Sutskever ∗Corresponding author: Yang Liu. et al., 2014), it is hard to interpret each hidden state in neural networks from a linguistic perspective. On the other hand, prior knowledge in machine translation is usually represented in discrete symbolic forms such as dictionaries and rules (Nirenburg, 1989) that explicitly encode translation regularities. It is difficult to transform prior knowledge represented in discrete forms to continuous representations required by neural networks. Therefore, a number of authors have endeavored to integrate prior knowledge into NMT in recent years, either by modifying model architectures (Tu et al., 2016; Cohn et al., 2016; Tang et al., 2016; Feng et al., 2016) or by modifying training objectives (Cohn et al., 2016; Feng et al., 2016; Cheng et al., 2016). For example, to address the over-translation and under-translation problems widely observed in NMT, Tu et al. 
(2016) directly extend standard NMT to model the coverage constraint that each source phrase should be translated into exactly one target phrase (Koehn et al., 2003). Alternatively, Cohn et al. (2016) and Feng et al. (2016) propose to control the fertilities of source words by appending additional additive terms to training objectives. Although these approaches have demonstrated clear benefits of incorporating prior knowledge into NMT, how to combine multiple overlapping, arbitrary prior knowledge sources still remains a major challenge. It is difficult to achieve this end by directly modifying model architectures because neural networks usually impose strong independence assumptions between hidden states. As a result, extending a neural model requires that the interdependence of information sources be modeled explicitly (Tu et al., 2016; Tang et al., 2016), making it hard to extend. While this drawback can be partly alleviated by appending additional additive terms to training objectives (Cohn et al., 2016; Feng et al., 2016), these terms are restricted to a 1514 limited number of simple constraints. In this work, we propose a general framework for integrating multiple overlapping, arbitrary prior knowledge sources into NMT using posterior regularization (Ganchev et al., 2010). Our framework is capable of incorporating indirect supervision via posterior distributions of neural translation models. To represent prior knowledge sources as arbitrary real-valued features, we define the posterior distribution as a loglinear model instead of a constrained posterior set (Ganchev et al., 2010). This treatment not only leads to a simpler and more efficient training algorithm but also achieves better translation performance. Experiments show that our approach is able to incorporate a variety of features and achieves significant improvements over posterior regularization using constrained posterior sets on NIST Chinese-English datasets. 2 Background 2.1 Neural Machine Translation Given a source sentence x = x1, . . . , xi, . . . , xI and a target sentence y = y1, . . . , yj, . . . , yJ, a neural translation model (Sutskever et al., 2014; Bahdanau et al., 2015) is usually factorized as a product of word-level translation probabilities: P(y|x; θ) = J Y j=1 P(yj|x, y<j; θ), (1) where θ is a set of model parameters and y<j = y1, . . . , yj−1 denotes a partial translation. The word-level translation probability is defined using a softmax function: P(yj|x, y<j; θ) ∝exp  f(vyj, vx, vy<j, θ)  , (2) where f(·) is a non-linear function, vyj is a vector representation of the j-th target word yj, vx is a vector representation of the source sentence x that encodes the context on the source side, and vy<j is a vector representation of the partial translation y<j that encodes the context on the target side. Given a training set {⟨x(n), y(n)⟩}N n=1, the standard training objective is to maximize the loglikelihood of the training set: ˆθMLE = argmax θ n L(θ) o , (3) where L(θ) = N X n=1 log P(y(n)|x(n); θ). (4) Although the introduction of vector representations into machine translation has resulted in substantial improvements in terms of translation quality (Junczys-Dowmunt et al., 2016), it is difficult to incorporate prior knowledge represented in discrete symbolic forms into NMT. For example, given a Chinese-English dictionary containing ground-truth translational equivalents such as ⟨baigong, the White House⟩, it is non-trivial to leverage the dictionary to guide the learning process of NMT. 
To address this problem, Tang et al. (2016) propose a new architecture called phraseNet on top of RNNsearch (Bahdanau et al., 2015) that equips standard NMT with an external memory storing phrase tables. Another important prior knowledge source is the coverage constraint (Koehn et al., 2003): each source phrase should be translated into exactly one target phrase. To encode this linguistic intuition into NMT, Tu et al. (2016) extend standard NMT with a coverage vector to keep track of the attention history. While these approaches are capable of incorporating individual prior knowledge sources separately, how to combine multiple overlapping, arbitrary knowledge sources still remains a major challenge. This can be hardly addressed by modifying model architectures because of the lack of interpretability in NMT and the incapability of neural networks in modeling arbitrary knowledge sources. Although modifying training objectives to include additional knowledge sources as additive terms can partially alleviate this problem, these terms have been restricted to a limited number of simple constraints (Cheng et al., 2016; Cohn et al., 2016; Feng et al., 2016) and incapable of combining arbitrary knowledge sources. Therefore, it is important to develop a new framework for integrating arbitrary prior knowledge sources into NMT. 2.2 Posterior Regularization Ganchev et al. (2010) propose posterior regularization for incorporating indirect supervision via constraints on posterior distributions of structured latent-variable models. The basic idea is to penalize the log-likelihood of a neural translation model 1515 with the KL divergence between a desired distribution that incorporates prior knowledge and the model posteriors. The posterior regularized likelihood is defined as F(θ, q) = λ1L(θ) − λ2 N X n=1 min q∈Q KL  q(y) P(y|x(n); θ),  (5) where λ1 and λ2 are hyper-parameters to balance the preference between likelihood and posterior regularization, Q is a set of constrained posteriors: Q = {q(y) : Eq[φ(x, y)] ≤b}, (6) where φ(x, y) is constraint feature and b is the bound of constraint feature expectations. Ganchev et al. (2010) use constraint features to encode structural bias and define the set of valid distributions with respect to the expectations of constraint features to facilitate inference. As maximizing F(θ, q) involves minimizing the KL divergence, Ganchev et al. (2010) present a minorization-maximization algorithm akin to EM at sentence level: E : q(t+1) = argmin q KL  q(y) P(y|x(n); θ(t))  M : θ(t+1) = argmax θ Eq(t+1) h log P(y|x(n); θ) i However, directly applying posterior regularization to neural machine translation faces a major difficulty: it is hard to specify the hyper-parameter b to effectively bound the expectation of features, which are usually real-valued in translation (Och and Ney, 2002; Koehn et al., 2003; Chiang, 2005). For example, the coverage penalty constraint (Wu et al., 2016) proves to be an essential feature for controlling the length of a translation in NMT. As the value of coverage penalty varies significantly over different sentences, it is difficult to set an appropriate bound for all sentences on the training data. In addition, the minorization-maximization algorithm involves an additional step to find q(t+1) as compared with standard NMT, which increases training time significantly. 3 Posterior Regularization for Neural Machine Translation 3.1 Modeling In this work, we propose to adapt posterior regularization (Ganchev et al., 2010) to neural machine translation. 
The major difference is that we represent the desired distribution as a log-linear model (Och and Ney, 2002) rather than a constrained posterior set as described in (Ganchev et al., 2010): J (θ, γ) = λ1L(θ) − λ2 N X n=1 KL  Q(y|x(n); γ) P(y|x(n); θ)  , (7) where the desired distribution that encodes prior knowledge is defined as: 1 Q(y|x; γ) = exp  γ · φ(x, y)  P y′ exp  γ · φ(x, y′) . (8) As compared to previous work on integrating prior knowledge into NMT (Tu et al., 2016; Cohn et al., 2016; Tang et al., 2016), our approach provides a general framework for combining arbitrary knowledge sources. This is due to log-linear models that offer sufficient flexibility to represent arbitrary prior knowledge sources as features. We tackle the representation discrepancy problem by associating the Q distribution that encodes discrete representations of prior knowledge with neural models using continuous representations learned from data in the KL divergence. Another advantage of our approach is the transparency to model architectures. In principle, our approach can be applied to any neural models for natural language processing. Our approach also differs from the original version of posterior regularization (Ganchev et al., 2010) in the definition of desired distribution. We resort to log-linear models (Och and Ney, 2002) to incorporate features that have proven effective in SMT. Another benefit of using log-linear models is the differentiability of our training objective (see Eq. (7)). It is easy to leverage standard stochastic gradient descent algorithms to optimize model parameters (Section 3.3). 3.2 Feature Design In this section, we introduce how to design features to encode prior knowledge in the desired dis1Ideally, the desired distribution Q should be fixed to guide the learning process of P. However, it is hard to manually specify the feature weights γ. Therefore, we propose to train both θ and λ jointly (see Section 3.3). We find that joint training results in significant improvements in practice (see Table 1). 1516 tribution. Note that not all features in SMT can be adopted to our framework. This is because features in SMT are defined on latent structures such as phrase pairs and synchronous CFG rules, which are not accessible to the decoding process of NMT. Fortunately, we can still leverage internal information in neural models that is linguistically meaningful such as the attention matrix a (Bahdanau et al., 2015). We will introduce a number of features used in our experiments as follows. 3.2.1 Bilingual Dictionary It is natural to leverage a bilingual dictionary D to improve neural machine translation. Arthur et al. (2016) propose to incorporate discrete translation lexicons into NMT by using the attention vector to select lexical probabilities on which to be focused. In our work, for each entry ⟨x, y⟩∈D in the dictionary, a bilingual dictionary (BD) feature is defined at the sentence level: φBD⟨x,y⟩(x, y) =  1 if x ∈x ∧y ∈y 0 otherwise . (9) Note that number of bilingual dictionary features depends on the vocabulary of the neural translation model. Entries containing out-of-vocabulary words has to be discarded. 3.2.2 Phrase Table Phrases, which are sequences of consecutive words, are capable of memorizing local context to deal with word ordering within phrases and translation of short idioms, word insertions or deletions (Koehn et al., 2003; Chiang, 2005). 
As a result, phrase tables that specify phrase-level correspondences between the source and target languages also prove to be an effective knowledge source in NMT (Tang et al., 2016). Similar to the bilingual dictionary features, we define a phrase table (PT) feature for each entry ⟨˜x, ˜y⟩in a phrase table P: φPT⟨˜x,˜y⟩(x, y) =  1 if ˜x ∈x ∧˜y ∈y 0 otherwise . (10) The number of phrase table features also depends on the vocabulary of the neural translation model. 3.2.3 Coverage Penalty To overcome the over-translation and undertranslation problems widely observed in NMT, a number of authors have proposed to model the fertility (Brown et al., 1993) and converge constraint (Koehn et al., 2003) to improve the adequacy of translation (Tu et al., 2016; Cohn et al., 2016; Feng et al., 2016; Wu et al., 2016; Mi et al., 2016). We follow Wu et al. (2016) to define a coverage penalty (CP) feature to penalize source words with lower sum of attention weights: 2 φCP(x, y) = |x| X i=1 log  min  |y| X j=1 ai,j, 1.0  , (11) where ai,j is the attention probability of the j-th target word on the i-th source word. Note that the value of coverage penalty feature varies significantly over sentences of different lengths. 3.2.4 Length Ratio Controlling the length of translations is very important in NMT as neural models tend to generate short translations for long sentences, which deteriorates the translation performance of NMT for long sentences as compared with SMT (Shen et al., 2016). Therefore, we define the length ratio (LR) feature to encourage the length of a translation to fall in a reasonable range: φLR(x, y) =  (β|x|)/|y| if β|x| < |y| |y|/(β|x|) otherwise , (12) where β is a hyper-parameter for penalizing too long or too short translations. For example, to convey the same meaning, an English sentence is usually about 1.2 times longer than a Chinese sentence. As a result, we can set β = 1.2. If the length of a Chinese sentence |x| is 10 and the length of an English sentence |y| is 12, then, φLR(x, y) = 1. If the translation is too long (e.g., |y| = 100), then the feature value is 0.12. If the translation is too short (e.g., |y| = 6), the feature value is 0.5. 3.3 Training In training, our goal is to find a set of model parameters that maximizes the posterior regularized likelihood: ˆθ, ˆγ = argmax θ,γ n J (θ, γ) o . (13) 2For simplicity, we omit the attention matrix a in the input of the coverage feature function. 1517 Note that unlike the original version of posterior regularization (Ganchev et al., 2010) that relies on a minorization-maximization algorithm to optimize model parameters, our training objective is differentiable with respect to model parameters. Therefore, it is easy to use standard stochastic gradient descent algorithms to train our model. However, a major difficulty in calculating gradients is that the algorithm needs to sum over all candidate translations in an exponential search space for KL divergence. For example, the partial derivative of J (θ, γ) with respect to γ is given by ∂J (θ, γ) ∂γ = −λ2 × N X n=1 ∂ ∂γ KL  Q(y|x(n); γ) P(y|x(n); θ)  . (14) The KL divergence is defined as KL  Q(y|x(n); γ) P(y|x(n); θ)  = X y∈Y(x(n)) Q(y|x(n); γ) log Q(y|x(n); γ) P(y|x(n); θ), (15) where Y(x(n)) is a set of all possible candidate translations for the source sentence x(n). To alleviate this problem, we follow Shen et al. (2016) to approximate the full search space Y(x(n)) with a sampled sub-space S(x(n)). 
Therefore, the KL divergence can be approximated as KL  Q(y|x(n); γ) P(y|x(n); θ)  ≈ X y∈S(x(n)) ˜Q(y|x(n); γ) log ˜Q(y|x(n); γ) ˜P(y|x(n); θ) . (16) Note that the Q distribution is also approximated on the sub-space: ˜Q(y|x(n); γ) = exp(γ · φ(x(n), y)) P y′∈S(x(n)) exp(γ · φ(x(n), y′)). (17) We follow Shen et al. (2016) to control the sharpness of approximated neural translation distribution normalized on the sampled sub-space: ˜P(y|x(n); θ) = P(y|x(n); θ)α P y′∈S(x(n)) P(y′|x(n); θ)α . (18) 3.4 Search Given learned model parameters ˆθ and ˆγ, the decision rule for translating an unseen source sentence x is given by ˆy = argmax Y(x) n P(y|x; ˆθ) o . (19) The search process can be factorized at the word level: ˆyj = argmax y∈Vy n P(y|x, ˆy<j; ˆθ) o , (20) where Vy is the target language vocabulary. Although this decision rule shares the same efficiency and simplicity with standard NMT (Bahdanau et al., 2015), it does not involve prior knowledge in decoding. Previous studies reveal that incorporating prior knowledge in decoding also significantly boosts translation performance (Arthur et al., 2016; He et al., 2016; Wang et al., 2016). As directly incorporating prior knowledge into the decoding process of NMT depends on both model structure and the locality of features, we resort to a coarse-to-fine approach instead to keep the architecture transparency of our approach. Given a source sentence x in the test set, we first use the neural translation model P(y|x; ˆθ) to generate a k-best list of candidate translation C(x). Then, the algorithm decides on the most probable candidate translation using the following decision rule: ˆy = argmax y∈C(x) n log P(y|x; ˆθ) + ˆγ · φ(x, y) o . (21) 4 Experiments 4.1 Setup We evaluate our approach on Chinese-English translation. The evaluation metric is caseinsensitive BLEU calculated by the multibleu.perl script. Our training set3 consists of 1.25M sentence pairs with 27.9M Chinese words and 34.5M English words. We use the NIST 2002 dataset as validation set and the NIST 2003, 2004, 2005, 2006, 2008 datasets as test sets. In the experiments, we compare our approach with the following two baseline approaches: 3The training set includes LDC2002E18, LDC2003E07, LDC2003E14, part of LDC2004T07, LDC2004T08 and LDC2005T06. 1518 Method Feature MT02 MT03 MT04 MT05 MT06 MT08 All RNNSEARCH N/A 33.45 30.93 32.57 29.86 29.03 21.85 29.11 CPR N/A 33.84 31.18 33.26 30.67 29.63 22.38 29.72 POSTREG BD 34.65 31.53 33.82 30.66 29.81 22.55 29.97 PT 34.56 31.32 33.89 30.70 29.84 22.62 29.99 LR 34.39 31.41 34.19 30.80 29.82 22.85 30.14 BD+PT 34.66 32.05 34.54 31.22 30.70 22.84 30.60 BD+PT+LR 34.37 31.42 34.18 30.99 29.90 22.87 30.20 this work BD 36.61 33.47 36.04 32.96 32.46 24.78 32.27 PT 35.07 32.11 34.73 31.84 30.82 23.23 30.86 CP 34.68 31.99 34.67 31.37 30.80 23.34 30.76 LR 34.57 31.89 34.95 31.80 31.43 23.75 31.12 BD+PT 36.30 33.83 36.02 32.98 32.53 24.54 32.29 BD+PT+CP 36.11 33.64 36.36 33.11 32.53 24.57 32.39 BD+PT+CP+LR 36.10 33.64 36.48 33.08 32.90 24.63 32.51 Table 1: Comparison of BLEU scores on the Chinese-English datasets. RNNSEARCH is an attentionbased neural machine translation model (Bahdanau et al., 2015) that does not incorporate prior knowledge. CPR extends RNNSEARCH by introducing coverage penalty refinement (Eq. (11)) in decoding. POSTREG extends RNNSEARCH with posterior regularization (Ganchev et al., 2010), which uses constraint features to represent prior knowledge and a constrained posterior set to denote the desired distribution. 
Note that POSTREG cannot use the CP feature (Section 3.2.3) because it is hard to bound the feature value appropriately. On top of RNNSEARCH, our approach also exploits posterior regularization to incorporate prior knowledge but uses a log-linear model to denote the desired distribution. All results of this work are significantly better than RNNSEARCH (p < 0.01). 1. RNNSEARCH (Bahdanau et al., 2015): a standard attention-based neural machine translation model, 2. CPR (Wu et al., 2016): extending RNNSEARCH by introducing coverage penalty refinement (Eq. (11)) in decoding, 3. POSTREG (Ganchev et al., 2010): extending RNNSEARCH with posterior regularization using constrained posterior set. For RNNSEARCH, we use an in-house attention-based NMT system that achieves comparable translation performance with GROUNDHOG (Bahdanau et al., 2015), which serves as a baseline approach in our experiments. We limit vocabulary size to 30K for both languages. The word embedding dimension is set to 620. The dimension of hidden layer is set to 1,000. In training, the batch size is set to 80. We use the AdaDelta algorithm (Zeiler, 2012) for optimizing model parameters. In decoding, the beam size is set to 10. For CPR, we simply follow Wu et al. (2016) to incorporate the coverage penalty into the beam search algorithm of RNNSEARCH. For POSTREG, we adapt the original version of posterior regularization (Ganchev et al., 2010) to NMT on top of RNNSEARCH. Following Ganchev et al. (2010), we use a ten-step projected gradient descent algorithm to search for an approximate desired distribution in the E step and a one-step gradient descent for the M step. Our approach extends RNNSEARCH by incorporating prior knowledge. For each source sentence, we sample 80 candidate translations to approximate the ˜P and ˜Q distributions. The hyperparameter α is set to 0.2. The batch size is 1. The hyper-parameters λ1 and λ2 are set to 8×10−5 and 2.5 × 10−4. Note that they not only balance the preference between likelihood and posterior regularization, but also control the values of gradients to fall in a reasonable range for optimization. We construct bilingual dictionary and phrase table in an automatic way. First, we run the statistical machine translation system MOSES (Koehn and Hoang, 2007) to obtain probabilistic bilingual dictionary and phrase table. For the bilingual dictionary, we retain entries with probabilities higher than 0.1 in both source-to-target and 1519 Feature Rerank MT02 MT03 MT04 MT05 MT06 MT08 All BD w/o 36.06 32.99 35.62 32.59 32.13 24.36 31.87 w/ 36.61 33.47 36.04 32.96 32.46 24.78 32.27 PT w/o 34.98 32.01 34.71 31.77 30.77 23.20 30.81 w/ 35.07 32.11 34.73 31.84 30.82 23.23 30.86 CP w/o 34.68 31.99 34.67 31.37 30.80 23.34 30.76 w/ 34.68 31.99 34.67 31.37 30.80 23.34 30.76 LR w/o 34.60 31.89 34.79 31.72 31.39 23.63 31.03 w/ 34.57 31.89 34.95 31.80 31.43 23.75 31.12 BD+PT w/o 35.76 33.27 35.64 32.47 32.03 24.17 31.83 w/ 36.30 33.83 36.02 32.98 32.53 24.54 32.29 BD+PT+CP w/o 35.71 33.15 35.81 32.52 32.16 24.11 31.89 w/ 36.11 33.64 36.36 33.11 32.53 24.57 32.39 BD+PT+CP+LR w/o 36.06 33.01 35.86 32.70 32.24 24.27 31.96 w/ 36.10 33.64 36.48 33.08 32.90 24.63 32.51 Table 2: Effect of reranking on translation quality. target-to-source directions. For phrase table, we first remove phrase pairs that occur less than 10 times and then retain entries with probabilities higher than 0.5 in both directions. As a result, both bilingual dictionary and phrase table contain highquality translation correspondences. 
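To make the feature functions of Section 3.2 and the reranking rule of Eq. (21) concrete, the following sketch computes the BD, PT, CP and LR features for a candidate translation and rescores a k-best list. The data structures (`dictionary`, `phrase_table`, the attention matrix, and the learned weights `gamma`) are placeholders for the resources and weights described above, not the released system.

```python
# Hedged sketch of Eqs. (9)-(12) and the reranking rule of Eq. (21).
import math

def bd_features(src_tokens, tgt_tokens, dictionary):
    """Eq. (9): one indicator per entry <x, y>; fires if x is a source token
    and y is a target token."""
    return [1.0 if x in src_tokens and y in tgt_tokens else 0.0
            for x, y in dictionary]

def pt_features(src_text, tgt_text, phrase_table):
    """Eq. (10): one indicator per phrase pair; fires if both phrases occur."""
    return [1.0 if xp in src_text and yp in tgt_text else 0.0
            for xp, yp in phrase_table]

def coverage_penalty(attention):
    """Eq. (11): sum_i log(min(sum_j a_ij, 1.0)); attention[i][j] is the weight
    of target word j on source word i. A tiny floor guards against log(0)."""
    return sum(math.log(min(max(sum(row), 1e-9), 1.0)) for row in attention)

def length_ratio(src_len, tgt_len, beta=1.236):
    """Eq. (12): equals 1 when |y| = beta * |x|, decays for too long/short y."""
    return beta * src_len / tgt_len if beta * src_len < tgt_len \
        else tgt_len / (beta * src_len)

def rerank(kbest, gamma, phi):
    """Eq. (21): argmax over the k-best list of log P(y|x) + gamma . phi(x, y).
    Each candidate is a dict holding at least its model log-probability."""
    return max(kbest, key=lambda c: c["logp"] +
               sum(g * f for g, f in zip(gamma, phi(c))))

# toy check of the two dense features
att = [[0.6, 0.3], [0.1, 0.05]]                 # 2 source words x 2 target words
print(coverage_penalty(att), length_ratio(src_len=10, tgt_len=12))
```

During training these features enter the log-linear distribution of Eq. (17) over the sampled sub-space; at test time they are used only in the reranking step of Eq. (21).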
We estimate the length ratio on Chinese-English data and set the hyper-parameter β to 1.236. By default, both POSTREG and our approach use reranking to search for the most probable translations (Section 3.4). 4.2 Main Results Table 1 shows the BLEU scores obtained by RNNSEARCH, POSTREG, and our approach on the Chinese-English datasets. We find POSTREG achieves significant improvements over RNNSEARCH by adding features that encode prior knowledge. The most effective single feature for POSTREG seems to be the length ratio (LR) feature, suggesting that it is important for NMT to control the length of translation to improve translation quality. Note that POSTREG is unable to include the coverage penalty (CP) feature because the feature value varies significantly over different sentences. It is hard to specify an appropriate bound b for constraining the expected feature value. We observe that a loose bound often makes the training process very unstable and fail to converge. Combining features obtains further modest improvements. Our approach outperforms both RNNSEARCH and POSTREG significantly. The bilingual dictionary (BD) feature turns out to make the most contribution. Compared with CPR that imposes coverage penalty during decoding, our approach that using a single CP feature obtains a significant improvement (i.e., 30.76 over 29.72), suggesting that incorporating prior knowledge sources in modeling might be more beneficial than in decoding. We find that combining features only results in modest improvements for our approach. One possible reason is that the bilingual dictionary and phrase table features overlap on single word pairs. 4.3 Effect of Reranking Table 2 shows the effect of reranking on translation quality. We find that using prior knowledge features to rescore the k-best list produced by the neural translation model usually leads to improvements. This finding confirms that adding prior knowledge is beneficial for NMT, either in the training or decoding process. 4.4 Training Speed Initialized with the best RNNSEARCH model trained for 300K iterations, our model converges after about 100K iterations. For each iteration, our approach is 1.5 times slower than RNNSEARCH. On a single GPU device Tesla M40, it takes four days to train the RNNSEARCH model and three extra days to train our model. 4.5 Example Translations Table 3 gives four examples to demonstrate the benefits of adding features. 1520 Source lijing liang tian yu bingxue de fenzhan , 31ri shenye 23 shi 50 fen , shanghai jichang jituan yuangong yinglai le 2004nian de zuihou yige hangban . Reference after fighting with ice and snow for two days , staff members of shanghai airport group welcomed the last flight of 2004 at 23 : 50pm on the 31st . RNNSEARCH after a two - day and two - day journey , the team of shanghai ’s airport in shanghai has ushered in the last flight in 2004 . + BD after two days and nights fighting with ice and snow , the shanghai airport group ’s staff welcomed the last flight in 2004 . Source suiran tonghuopengzhang weilai ji ge yue reng jiang weizhi zai baifenzhier yishang , buguo niandi zhiqian keneng jiangdi . Reference although inflation will remain above 2 % for the coming few months , it may decline by the end of the year . RNNSEARCH although inflation has been maintained for more than two months from the year before the end of the year , it may be lower . + PT although inflation will remain at more than 2 percent in the next few months , it may be lowered before the end of the year . 
Source qian ji tian ta ganggang chuyuan , jintian jianchi lai yu lao pengyou daobie . Reference just discharged from the hospital a few days ago , he insisted on coming to say farewell to his old friend today . RNNSEARCH during the previous few days , he had just been given treatment to the old friends . + CP during the previous few days , he had just been discharged from the hospital , and he insisted on goodbye to his old friend today . Source ( guoji ) yiselie fuzongli fouren jihua kuojian gelan gaodi dingjudian Reference ( international ) israeli deputy prime minister denied plans to expand golan heights settlements RNNSEARCH ( world ) israeli deputy prime minister denies the plan to expand the golan heights in the golan heights + LR ( international ) israeli deputy prime minister denies planning to expand golan heights Table 3: Example translations that demonstrate the effect of adding features. In the first example, source words “fenzhan” (fighting), “yuangong” (staff), and “yinglai” (welcomed) are untranslated in the output of RNNSEARCH. Adding the bilingual dictionary (BD) feature encourages the model to translate these words if they occur in the dictionary. In the second example, while RNNSEARCH fails to capture phrase cohesion, adding the phrase table (PT) feature is beneficial for translating short idioms, word insertions or deletions that are sensitive to local context. In the third example, RNNSEARCH tends to omit many source content words such as “chuyuan” (discharged from the hospital), “jianchi” (insisted on), and “daobie” (say farewell). The coverage penalty (CP) feature helps to alleviate the word omission problem. In the fourth example, the translation produced by RNNSEARCH is too long and “the golan heights” occurs twice. The length ratio (LR) feature is capable of controlling the sentence length in a reasonable range. 5 Related Work Our work is directly inspired by posterior regularization (Ganchev et al., 2010). The major difference is that we use a log-linear model to represent the desired distribution rather than a constrained posterior set. Using log-linear models not only enables our approach to incorporate arbitrary knowledge sources as real-valued features, but also is differentiable to be jointly trained with 1521 neural translation models efficiently. Our work is closely related to recent work on injecting prior knowledge into NMT (Arthur et al., 2016; Tu et al., 2016; Cohn et al., 2016; Tang et al., 2016; Feng et al., 2016; Wang et al., 2016). The major difference is that our approach aims to provide a general framework for incorporating arbitrary prior knowledge sources while keeping the neural translation model unchanged. He et al. (2016) also propose to combine the strengths of neural networks on learning representations and log-linear models on encoding prior knowledge. But they treat neural translation models as a feature in the log-linear model. In contrast, we connect the two models via KL divergence to keep the transparency of our approach to model architectures. This enables our approach to be easily applied to other neural models in NLP. 6 Conclusion We have presented a general framework for incorporating prior knowledge into end-to-end neural machine translation based on posterior regularization (Ganchev et al., 2010). The basic idea is to guide NMT models towards desired behavior using a log-linear model that encodes prior knowledge. 
Experiments show that incorporating prior knowledge leads to significant improvements over both standard NMT and posterior regularization using constrained posterior sets. Acknowledgments We thank Shiqi Shen for useful discussions and anonymous reviewers for insightful comments. This work is supported by the National Natural Science Foundation of China (No.61432013), the 973 Program (2014CB340501), and the National Natural Science Foundation of China (No.61522204). This research is also supported by Sogou Inc. and the Singapore National Research Foundation under its International Research Centre@Singapore Funding Initiative and administered by the IDM Programme. References Philip Arthur, Graham Neubug, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. arXiv:1606.02006v2. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics . Yong Cheng, Shiqi Shen, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Agreement-based learning of parallel lexicons and phrases from non-parallel corpora. In Proceedings of IJCAI. David Chiang. 2005. A hirarchical phrase-based model for statistical machine translation. In Proceedings of ACL. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of EMNLP. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of NAACL. Shi Feng, Shujie Liu, Nan Yang, Mu Li, Ming Zhou, and Kenny Q. Zhu. 2016. Improving attention modeling with implicit distortion and fertility for machine translation. In Proceedings of COLING. Kuzman Ganchev, Jo˜ao Grac¸a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research . Wei He, Zhongjun He, Hua Wu, and Haifeng Wang. 2016. Improved nerual machine translation with SMT features. In Proceedings of AAAI. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation . Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? a case study on 30 translation directions. arXiv:1610.01108v2. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In Proceedings of EMNLP. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL. Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Proceedings of EMNLP. Sergei Nirenburg. 1989. Knowledge-based machine translation. Machine Translation . 1522 Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of ACL. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. 
Sequence to sequence learning with neural networks. In Proceedings of NIPS. Yaohua Tang, Fandong Meng, Zhengdong Lu, Hang Li, and Philip L. H. Yu. 2016. Neural machine translation with external phrase memory. arXiv:1606.01792v1. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL. Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, and Min Zhang. 2016. Neural machine translation advised by statistical machine translation. arXiv:1610.05150. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144v2. Matthew D. Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv:1212.5701. 1523
2017
139
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 146–157 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1014 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 146–157 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1014 Neural AMR: Sequence-to-Sequence Models for Parsing and Generation Ioannis Konstas† Srinivasan Iyer† Mark Yatskar† Yejin Choi† Luke Zettlemoyer†‡ †Paul G. Allen School of Computer Science & Engineering, Univ. of Washington, Seattle, WA {ikonstas,sviyer,my89,yejin,lsz}@cs.washington.edu ‡Allen Institute for Artificial Intelligence, Seattle, WA [email protected] Abstract Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequencebased AMR models are robust against ordering variations of graph-to-sequence conversions. 1 Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text. As shown in Figure 1, AMR represents the meaning using a directed graph while abstracting away the surface forms in text. AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012), summarization (Liu et al., 2015), sentence compression (Takase et al., 2016), and event extraction (Huang et al., 2016). While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Obama elect.01 celebrate.01 vote.01 and * op1 op2 ARG0 poss ARG0 person name name op1 person ARG0-of Figure 1: An example sentence and its corresponding Abstract Meaning Representation (AMR). AMR encodes semantic dependencies between entities mentioned in the sentence, such as “Obama” being the “arg0” of the verb “elected”. of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016). In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation. Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015). However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges. 
We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity. Our approach is two-fold. First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator. More concretely, first we use self-training to 146 bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator. This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data. Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in concise ways, as illustrated in Figure 2(d). This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs. Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example. Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach. For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup. For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU. We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure. 2 Related Work Alignment-based Parsing Flanigan et al. (2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm. Zhou et al. (2016) extend JAMR by performing the concept and relation identification tasks jointly with an incremental model. Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules. In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities. Grammar-based Parsing Wang et al. (2016) (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al. (2017), Brandt et al. (2016), Puzikov et al. (2016), and Goodman et al. (2016). Artzi et al. (2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al. (2016). Pust et al. (2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014), and employing several external semantic resources. Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities. Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017). Similar to our approach, Peng et al. 
(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens). However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6). AMR Generation Flanigan et al. (2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system. Pourdamghani et al. (2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder. Song et al. (2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order. Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random traversal of the graph. Finally, all three systems intersect with a large language model trained on Gigaword. We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus. Data Augmentation Our paired training procedure is largely inspired by Sennrich et al. (2016). They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, 147 and mixing it with the human translations. We instead pre-train on the external corpus first, and then fine-tune on the original dataset. 3 Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1). Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4). 3.1 Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a. The AMR is a rooted directed acylical graph. It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1. One of these nodes is a distinguished root, for example, the node and in Figure 1. Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1. The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W. We study the task of training an AMR parser, i.e., finding a set of parameters θP for model f, that predicts an AMR graph ˆa, given a sentence s: ˆa = argmax a f a|s; θP  (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θG, for a model f that predicts a sentence ˆs, given an AMR graph a: ˆs = argmax s f s|a; θG  (2) In both cases, we use the same family of predictors f, sequence-to-sequence models that use global attention, but the models have independent parameters, θP and θG. 
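As a small, concrete illustration of the symmetry between the two tasks, the snippet below builds one training example per direction from a single preprocessed sentence-AMR pair (the Figure 2(d) example, whose anonymized tokens are explained in Section 4): the same seq2seq family consumes both, with independent parameters θP and θG.

```python
# Illustration only: the parser and generator are the same architecture trained
# in opposite directions on the same pairs; only the token streams are swapped.
sentence = ("loc_0 officials held an expert group meeting "
            "in month_0 year_0 in loc_1").split()
amr = ("hold :ARG0 ( person :ARG0-of ( have-org-role :ARG1 loc_0 :ARG2 official ) ) "
       ":ARG1 ( meet :ARG0 ( person :ARG1-of expert :ARG2-of group ) ) "
       ":time ( date-entity year_0 month_0 ) :location loc_1").split()

parser_example = (sentence, amr)      # text-to-AMR parsing: f(a | s; theta_P)
generator_example = (amr, sentence)   # AMR-to-text generation: f(s | a; theta_G)
for src, tgt in (parser_example, generator_example):
    print(len(src), "->", len(tgt), "tokens")
```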
3.2 Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016).1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015). The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder. We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder. The decoder predicts an attention vector over the encoder hidden states using previous decoder states. The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence. The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state. The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from the input sequence into the output sequence. 3.3 Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens. We define a linearization order for an AMR graph as any sequence of its nodes and edges. A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details). Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization. 3.4 Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples. Neural sequence-to-sequence models suffer from sparsity with so few training pairs. To reduce the effect of sparsity, we use an external unannotated corpus of sentences Se, and a procedure which pairs the training of the parser and generator. Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs. Then it uses self-training 1We extended the Harvard NLP seq2seq framework from http://nlp.seas.harvard.edu/code. 148 Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N, and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG. 1: θP ←Train parser on D ▷Self-train AMR parser. 2: S1 e ←sample k sentences from Se 3: for i = 1 to N do 4: Ai e ←Parse Si e using parameters θP ▷Pre-train AMR parser. 5: θP ←Train parser on (Ai e, Si e) ▷Fine tune AMR parser. 6: θP ←Train parser on D with initial parameters θP 7: Si+1 e ←sample k · 10i new sentences from Se 8: end for 9: SN e ←sample k · 10N new sentences from Se ▷Pre-train AMR generator. 10: Ae ←Parse SN e using parameters θP 11: θG ←Train generator on (AN e , SN e ) ▷Fine tune AMR generator. 12: θG ←Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser. 
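The control flow of Algorithm 1 can be written compactly as below. The helpers `train_seq2seq`, `parse_with`, and `sample_sentences` are hypothetical stand-ins for the seq2seq training, decoding, and Gigaword sampling code; the sketch only spells out how self-training, pre-training, and fine-tuning are interleaved.

```python
# Hedged sketch of Algorithm 1 (paired training); helper functions are
# hypothetical placeholders passed in by the caller.

def paired_training(D, gigaword, train_seq2seq, parse_with, sample_sentences,
                    n_iters=3, k=200_000):
    """D is a list of (sentence, amr) pairs; gigaword is the unlabeled corpus."""
    sents = [s for s, a in D]
    amrs = [a for s, a in D]
    theta_P = train_seq2seq(src=sents, tgt=amrs)              # line 1
    S = sample_sentences(gigaword, k)                         # line 2
    for i in range(1, n_iters + 1):                           # lines 3-8
        A = parse_with(theta_P, S)                            # line 4: self-label
        theta_P = train_seq2seq(src=S, tgt=A)                 # line 5: pre-train parser
        theta_P = train_seq2seq(src=sents, tgt=amrs,          # line 6: fine-tune on D
                                init=theta_P)
        S = sample_sentences(gigaword, k * 10 ** i)           # line 7: grow the sample
    A = parse_with(theta_P, S)                                # lines 9-10 (largest sample)
    theta_G = train_seq2seq(src=A, tgt=S)                     # line 11: pre-train generator
    theta_G = train_seq2seq(src=amrs, tgt=sents,              # line 12: fine-tune generator
                            init=theta_G)
    return theta_P, theta_G
```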
Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus Se, (2) creating a new set of parameters by training on Se, and (3) fine-tuning those parameters on the original paired data. After each iteration, we increase the size of the sample from Se by an order of magnitude. After we have the best parser from self-training, we use it to label AMRs for Se and pre-train the generator. The final step of the procedure fine-tunes the generator on the original dataset D. 4 AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs. Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) address sparsity from certain open class vocabulary entries, such as named entities (NEs) and quantities. Figure 2(d) contains example inputs and outputs with all of our preprocessing techniques. Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept. In case of re-entrant nodes we replace the variable mention with its co-referring concept. Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his. Following Pourdamghani et al. (2016) we also remove senses from all concepts for AMR generation only. Figure 2(a) contains an example output after this stage. 4.1 Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W. 83.4% of them occur fewer than 5 times in the dataset. In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization. First, we anonymize sub-graphs headed by one of AMR’s over 140 fine-grained entity types that contain a :name role. This captures structures referring to entities such as person, country, miscellaneous entities marked with *-enitity, and typed numerical values, *-quantity. We exclude date entities (see the next section). We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0. On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al. (2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph. We record this mapping for use during testing of generation models. If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model’s output with the most frequent mapping observed during training for the entity name. If the entity was never observed, we copy its name directly from the AMR graph. 
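A minimal sketch of the text-side anonymization and de-anonymization described above follows. It assumes the entity spans and their AMR types have already been identified by the aligners (or, at test time, by NER), and uses a per-type counter purely for readability, whereas the actual system groups indices more coarsely.

```python
# Hedged sketch of entity anonymization on the sentence side, plus the reverse
# mapping applied to generator output.

def anonymize(tokens, spans):
    """spans: (start, end, amr_type, surface) tuples, left-to-right, end exclusive.
    Returns anonymized tokens plus the mapping needed to undo the replacement."""
    out, mapping, counters, cursor = [], {}, {}, 0
    for start, end, etype, surface in spans:
        out.extend(tokens[cursor:start])
        idx = counters.get(etype, 0)
        counters[etype] = idx + 1
        anon = f"{etype}_{idx}"                 # e.g. country_0, city_0
        out.append(anon)
        mapping[anon] = surface
        cursor = end
    out.extend(tokens[cursor:])
    return out, mapping

def deanonymize(tokens, mapping, graph_names):
    """Replace predicted anonymization tokens with the recorded surface form,
    falling back to the name copied from the AMR graph for unseen entities."""
    return [mapping.get(t, graph_names.get(t, t)) for t in tokens]

toks = "US officials held an expert group meeting in New York".split()
anon, m = anonymize(toks, [(0, 1, "country", "US"), (8, 10, "city", "New York")])
print(" ".join(anon))                           # country_0 officials ... in city_0
print(" ".join(deanonymize(anon, m, graph_names={})))
```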
Anonymizing Dates. For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number. [Footnote 3: We also use three date format markers that appear in the text as: YYYYMMDD, YYMMDD, and YYYY-MM-DD.] In AMR generation, we render the corresponding format when predicted. Figure 2(b) contains an example of all preprocessing up to this stage.

[Figure 2: Preprocessing methods applied to sentence (top row) - AMR graph (left column) pairs. Sentence-graph pairs after (a) graph simplification, (b) named entity anonymization, (c) named entity clustering, and (d) insertion of scope markers. The example pairs the sentence "US officials held an expert group meeting in January 2002 in New York." with its AMR graph; after all stages, panel (d) pairs the linearization "hold :ARG0 ( person :ARG0-of ( have-org-role :ARG1 loc_0 :ARG2 official ) ) :ARG1 ( meet :ARG0 ( person :ARG1-of expert :ARG2-of group ) ) :time ( date-entity year_0 month_0 ) :location loc_1" with the sentence "loc_0 officials held an expert group meeting in month_0 year_0 in loc_1."]

Named Entity Clusters. When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005): person, location, organization and misc. This reduces the sparsity associated with many rarely occurring entity types. Figure 2(c) contains an example with named entity clusters.

NER for Parsing. When parsing, we must normalize test sentences to match our anonymized training data. To produce fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training. If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR. After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.

4.2 Linearization

Linearization Order. Our linearization order is defined by the order of nodes visited by depth-first search, including backward traversing steps.
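One way to realize this traversal is sketched below over a toy adjacency-list view of the graph (node mapped to a list of (edge label, child) pairs). The representation and the exact point at which backward steps are emitted are my own reading, chosen so that the sketch reproduces the example order discussed next.

```python
def linearization_order(graph, node):
    """Depth-first order over nodes and edges; the edges of a node are re-emitted
    in reverse (backward traversing steps) when the traversal leaves that node."""
    order = [node]
    edges = graph.get(node, [])
    for label, child in edges:
        order.append(label)                            # forward traversing step
        order.extend(linearization_order(graph, child))
    for label, _ in reversed(edges):
        order.append(label)                            # backward traversing steps
    return order

# Toy fragment of Figure 2: meet :ARG0 (person :ARG1-of expert :ARG2-of group)
g = {"meet": [(":ARG0", "person")],
     "person": [(":ARG1-of", "expert"), (":ARG2-of", "group")]}
print(linearization_order(g, "meet"))
# ['meet', ':ARG0', 'person', ':ARG1-of', 'expert', ':ARG2-of', 'group', ':ARG2-of', ':ARG1-of', ':ARG0']
```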
For example, in Figure 2, starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0. [Footnote 4: Sense, instance-of and variable information has been removed at the point of linearization.] The order traverses children in the sequence they are presented in the AMR. We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.

Rendering Function. Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node; (2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it. In case the node has only one child we omit the scope markers (denoted with left "(" and right ")" parentheses), thus significantly reducing the number of generated tokens. Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.

5 Experimental Setup

We conduct all experiments on the AMR corpus used in SemEval-2016 Task 8 (LDC2015E86), which contains 16,833/1,368/1,371 train/dev/test examples. For the paired training procedure of Algorithm 1, we use Gigaword as our external corpus and sample sentences that only contain words from the AMR corpus vocabulary W. We subsampled the original sentences to ensure there is no overlap with the AMR training or test sets. Table 2 summarizes statistics about the original dataset and the extracted portions of Gigaword.

Table 1: SMATCH scores for AMR Parsing. *Reported numbers are on the newswire portion of a previous release of the corpus (LDC2014T12).

Model                                 | Dev Prec | Dev Rec | Dev F1 | Test Prec | Test Rec | Test F1
SBMT (Pust et al., 2015)              |          |         |  69.0  |           |          |  67.1
CAMR (Wang et al., 2016)              |   72.3   |  61.4   |  66.6  |   70.4    |   63.1   |  66.5
CCG* (Artzi et al., 2015)             |   67.2   |  65.1   |  66.1  |   66.8    |   65.7   |  66.3
JAMR (Flanigan et al., 2014)          |          |         |        |   64.0    |   53.0   |  58.0
GIGA-20M                              |   62.2   |  66.0   |  64.4  |   59.7    |   64.7   |  62.1
GIGA-2M                               |   61.9   |  64.8   |  63.3  |   60.2    |   63.6   |  61.9
GIGA-200k                             |   59.7   |  62.9   |  61.3  |   57.8    |   60.9   |  59.3
AMR-ONLY                              |   54.9   |  60.0   |  57.4  |   53.1    |   58.1   |  55.5
SEQ2SEQ (Peng et al., 2017)           |          |         |        |   55.0    |   50.0   |  52.0
CHAR-LSTM (Barzdins and Gosko, 2016)  |          |         |        |           |          |  43.0

We evaluate AMR parsing with SMATCH (Cai and Knight, 2013), and AMR generation using BLEU (Papineni et al., 2002). [Footnote 5: We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007).] We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 – line 1). We searched over the set {128, 256, 500, 1024} for the best combination of sizes and set both to 500. Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and a dropout rate of 0.5. Across all models, when performance does not improve on the AMR dev set, we decay the learning rate by 0.8. For the initial parser trained on the AMR corpus (Algorithm 1 – line 1), we use a single-stack version of our model, set the initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set. All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes. We set the initial Gigaword sample size to k = 200,000 and executed a maximum of 3 iterations of self-training. For pre-training the parser and generator (Algorithm 1 – lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.
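The optimization recipe above amounts to a simple outer loop, sketched below for illustration only: run_epoch and eval_dev are hypothetical placeholders, and a PyTorch-style state_dict() is assumed for checkpointing, so this is not the authors' training script.

```python
def train(model, train_data, dev_data, run_epoch, eval_dev,
          epochs=60, lr=0.5, decay=0.8, batch_size=100, dropout=0.5):
    """SGD training with dev-driven learning-rate decay and best-on-dev selection."""
    best_score, best_state = float("-inf"), None
    for _ in range(epochs):
        run_epoch(model, train_data, lr=lr, batch_size=batch_size, dropout=dropout)
        score = eval_dev(model, dev_data)          # SMATCH for the parser, BLEU for the generator
        if score > best_score:
            best_score, best_state = score, model.state_dict()   # keep the best dev checkpoint
        else:
            lr *= decay                            # decay learning rate by 0.8 on a dev plateau
    return best_state, best_score
```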
We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1. We select the best performing model on the development set among all of these fine-tuning attempts. During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.

Table 2: LDC2015E86 AMR training set, GIGA-200k, GIGA-2M and GIGA-20M statistics; OOV@1 and OOV@5 are the out-of-vocabulary rates on the NL side with thresholds of 1 and 5, respectively. Vocabulary sizes are 13027 tokens for the AMR side, and 17319 tokens for the NL side.

Corpus    | Examples | OOV@1 | OOV@5
AMR       | 16833    | 44.7  | 74.9
GIGA-200k | 200k     | 17.5  | 35.3
GIGA-2M   | 2M       | 11.2  | 19.1
GIGA-20M  | 20M      | 8.0   | 12.7

6 Results

Parsing Results. Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences. Through every round of self-training, our parser improves. Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points. While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1. We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning. All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs). Our full models outperform JAMR, a graph-based model, but still lag behind other parser-dependent systems (CAMR) and resource-heavy approaches (SBMT). [Footnote 6: Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.]

Generation Results. Table 3 summarizes our AMR generation results on the development and test set. We outperform all previous state-of-the-art systems by the first round of self-training and further improve with the next rounds. Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86 by over 9 BLEU points. [Footnote 7: We also trained our generator on GIGA-2M and fine-tuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.] Overall, our model incorporates less data than previous approaches, as all reported methods train language models on the whole Gigaword corpus. We leave scaling our models to all of Gigaword for future work.

Table 3: BLEU results for AMR Generation. *Model has been trained on a previous release of the corpus (LDC2014T12).

Model                             | Dev  | Test
GIGA-20M                          | 33.1 | 33.8
GIGA-2M                           | 31.8 | 32.3
GIGA-200k                         | 27.2 | 27.4
AMR-ONLY                          | 21.7 | 22.0
PBMT* (Pourdamghani et al., 2016) | 27.2 | 26.9
TSP (Song et al., 2016)           | 21.1 | 22.4
TREETOSTR (Flanigan et al., 2016) | 23.0 | 23.0

Sparsity Reduction. Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2. By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 is reduced almost 5 times for GIGA-20M.

Preprocessing Ablation Study. We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical. Figure 2 contains examples for each setting of the ablations we evaluate on.
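Schematically, the four ablation settings reported in Table 4 below can be read as progressively switching off preprocessing stages; the flag names in this small sketch are mine, not identifiers from the paper's code.

```python
# Preprocessing flags for the ablation settings of Table 4 (names are illustrative).
ABLATIONS = {
    "FULL":                     dict(scope_markers=True,  ne_clusters=True,  anonymize=True),
    "FULL - SCOPE":             dict(scope_markers=False, ne_clusters=True,  anonymize=True),
    "FULL - SCOPE - NE":        dict(scope_markers=False, ne_clusters=False, anonymize=True),
    "FULL - SCOPE - NE - ANON": dict(scope_markers=False, ne_clusters=False, anonymize=False),
}
```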
First we evaluate using linearized graphs without parentheses for indicating scope, Figure 2(c), then without named entity clusters, Figure 2(b), and additionally without any anonymization, Figure 2(a). Table 4 summarizes our evaluation on AMR generation. Each component is required, and scope markers and anonymization contribute the most to overall performance. We suspect that without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph. We also evaluated the contribution of anonymization to AMR parsing (Table 5). Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017).

Table 4: BLEU scores for AMR generation ablations on preprocessing (DEV set).

Model                     | BLEU
FULL                      | 21.8
FULL - SCOPE              | 19.7
FULL - SCOPE - NE         | 19.5
FULL - SCOPE - NE - ANON  | 18.7

Table 5: SMATCH scores for AMR parsing ablations on preprocessing (DEV set).

Model       | Prec | Rec  | F1
FULL        | 54.9 | 60.0 | 57.4
FULL - ANON | 22.7 | 54.2 | 32.0

7 Linearization Evaluation

In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders. Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016), that seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.

7.1 Linearization Orders

All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited. Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d).

Human. This linearization traverses children in the order presented by the human-authored AMR annotations, exactly as shown in Figure 2(d).

Global-Random. We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset. We traverse children based on the position in the global ordering of the edge leading to a child.

Random. For each example in the dataset we traverse children following a different random order of edge types.

7.2 Results

We present AMR generation results for the three proposed linearization orders in Table 6. Random linearization order performs somewhat worse than traversing the graph according to the Human linearization order. Surprisingly, a per-example random linearization order performs nearly identically to a global random order, arguing that seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.

Table 6: BLEU scores for AMR generation for different linearization orders (DEV set).

Linearization Order | BLEU
HUMAN               | 21.7
GLOBAL-RANDOM       | 20.8
RANDOM              | 20.3

Human-authored AMR leaks information. The small difference between the random and global-random linearizations argues that our models are largely agnostic to variation in linearization order. On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.
To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments. We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information. [Footnote 8: This is consistent with constraints encoded in the annotation tool used to collect AMR. For example, :ARG0 edges are always ordered before :ARG1 edges.] Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence. The relative ordering of some pairs of AMR edges was particularly indicative of generation order. For example, the relative ordering of edges with types location and time was 17% more indicative of the generation order than the majority of generated locations before time. [Footnote 9: Consider the sentences "She went to school in New York two years ago" and "Two years ago, she went to school in New York", where "two years ago" is the time modifying constituent for the verb went and "New York" is the location modifying constituent of went.]

To compare to previous work we still report results using human orderings. However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent or randomly derived linearization orders. Arguably, our models are agnostic to this choice.

8 Qualitative Results

Figure 3 shows example outputs of our full system. The generated text for the first graph is nearly perfect, with only a small grammatical error due to anonymization. The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state. The model omits some information from the graph, namely the concepts terrorist and virus. In the third example there are greater parts of the graph that are missing, such as the whole sub-graph headed by expert. Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing), thus making their generation harder.

Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set. We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above. Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization and can successfully encode long range dependency information between concepts.

Table 7: Error analysis for AMR generation on a sample of 50 examples from the development set.

Error Type    | %
Coverage      | 29
Disfluency    | 23
Anonymization | 14
Sparsity      | 13
Attachment    | 12
Other         | 10

9 Conclusions

We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pre-training on millions of unlabeled sentences sourced from the Gigaword corpus. Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.
We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-the-art performance for generation (BLEU 33.8). For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al. (2005)). This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages (Bender, 2014). [Footnote 10: A list of actively maintained languages can be found here: http://moin.delph-in.net/GrammarCatalogue] Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000).

Acknowledgments

The research was supported in part by DARPA under the DEFT program through AFRL (FA8750-13-2-0019) and the CwC program through ARO (W911NF-15-1-0543), the ARO (W911NF-16-1-0121), the NSF (IIS-1252835, IIS-1562364, IIS-1524371), an Allen Distinguished Investigator Award, and gifts by Google and Facebook. The authors thank Rik Koncel-Kedziorski and the UW NLP group for helpful discussions, and the anonymous reviewers for their thorough and helpful comments.

[Figure 3: Linearized AMR after preprocessing, reference sentence, and output of the generator. We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities). The linearized AMR inputs are omitted here; the reference (REF) and system (SYS) sentences of the three examples are:
(1) REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains . SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains . COMMENT: disfluency.
(2) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus . SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza . COMMENT: coverage, disfluency, attachment.
(3) REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability . SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes . COMMENT: coverage, disfluency, attachment.]

References

Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1699–1710. http://aclweb.org/anthology/D15-1198.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Representations. CBLS, San Diego, California. http://arxiv.org/abs/1409.0473.

Guntis Barzdins and Didzis Gosko. 2016. RIGA at SemEval-2016 Task 8: Impact of Smatch extensions and character-level neural translation on AMR parsing accuracy. In Proceedings of the 10th International Workshop on Semantic Evaluation. Association for Computational Linguistics, San Diego, California, pages 1143–1147. http://www.aclweb.org/anthology/S16-1176.

Emily M. Bender. 2014. Language CoLLAGE: Grammatical description with the LinGO grammar matrix. In Proceedings of the 9th International Conference on Language Resources and Evaluation. Reykjavik, Iceland, pages 2447–2451.

Johannes Bjerva, Johan Bos, and Hessel Haagsma. 2016. The Meaning Factory at SemEval-2016 Task 8: Producing AMRs with Boxer. In Proceedings of the 10th International Workshop on Semantic Evaluation. Association for Computational Linguistics, San Diego, California, pages 1179–1184. http://www.aclweb.org/anthology/S16-1182.

Lauritz Brandt, David Grimm, Mengfei Zhou, and Yannick Versley. 2016. ICL-HD at SemEval-2016 Task 8: Meaning representation parsing - augmenting AMR parsing with a preposition semantic role labeling neural network. In Proceedings of the 10th International Workshop on Semantic Evaluation. Association for Computational Linguistics, San Diego, California, pages 1160–1166. http://www.aclweb.org/anthology/S16-1179.

Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sofia, Bulgaria, pages 748–752. http://www.aclweb.org/anthology/P13-2131.

Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal Recursion Semantics: An introduction. Research on Language and Computation 3(2):281–332. https://doi.org/10.1007/s11168-006-6327-9.

Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Valencia, Spain, pages 536–546. http://www.aclweb.org/anthology/E17-1051.

Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling.
In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, Ann Arbor, Michigan, pages 363–370. https://doi.org/10.3115/1219840.1219885. Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. Generation from abstract meaning representation using tree transducers. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, San Diego, California, pages 731–739. http://www.aclweb.org/anthology/N16-1087. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Baltimore, Maryland, pages 1426–1436. http://www.aclweb.org/anthology/P14-1134. James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016. UCL+Sheffield at SemEval-2016 Task 8: Imitation learning for AMR parsing with an alpha-bound. In Proceedings of the 10th International Workshop on Semantic Evaluation. Association for Computational Linguistics, San Diego, California, pages 1167–1172. http://www.aclweb.org/anthology/S16-1180. Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R. Voss, Jiawei Han, and Avirup Sil. 2016. Liberal event extraction and event schema induction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Berlin, Germany, pages 258–268. http://www.aclweb.org/anthology/P16-1025. Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-Based Machine Translation with Hyperedge Replacement Grammars. In Proceedings of the 2012 International Conference on Computational Linguistics. Bombay, India, pages 1359–1376. http://www.aclweb.org/anthology/C12-1083. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine 155 translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Prague, Czech Republic, pages 177–180. http://dl.acm.org/citation.cfm?id=1557769.1557821. Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. 2015. Toward abstractive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Denver, Colorado, pages 1077–1086. http://www.aclweb.org/anthology/N15-1114. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412– 1421. http://aclweb.org/anthology/D15-1166. Dipendra Kumar Misra and Yoav Artzi. 2016. Neural shift-reduce CCG semantic parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1775–1786. https://aclweb.org/anthology/D161183. 
Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. Association for Computational Linguistics, Montr´eal, Canada, pages 95–100. http://www.aclweb.org/anthology/W12-3018. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics 31(1):71–106. http://www.cs.rochester.edu/ gildea/palmerpropbank-cl.pdf. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, pages 311–318. https://doi.org/10.3115/1073083.1073135. Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the data sparsity issue in neural AMR parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Valencia, Spain, pages 366–375. http://www.aclweb.org/anthology/E17-1035. Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning English strings with abstract meaning representation graphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Doha, Qatar, pages 425– 429. http://www.aclweb.org/anthology/D14-1048. Nima Pourdamghani, Kevin Knight, and Ulf Hermjakob. 2016. Generating English from abstract meaning representations. In Proceedings of the 9th International Natural Language Generation conference. Association for Computational Linguistics, Edinburgh, UK, pages 21–25. http://anthology.aclweb.org/W16-6603. Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing english into abstract meaning representation using syntaxbased machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1143– 1154. https://aclweb.org/anthology/D/D15/D151136. Yevgeniy Puzikov, Daisuke Kawahara, and Sadao Kurohashi. 2016. M2L at SemEval-2016 Task 8: AMR parsing with neural networks. In Proceedings of the 10th International Workshop on Semantic Evaluation. Association for Computational Linguistics, San Diego, California, pages 1154–1159. http://www.aclweb.org/anthology/S16-1178. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Berlin, Germany, pages 86–96. http://www.aclweb.org/anthology/P16-1009. Melanie Siegel. 2000. HPSG Analysis of Japanese, Springer Berlin Heidelberg, pages 264–279. Linfeng Song, Yue Zhang, Xiaochang Peng, Zhiguo Wang, and Daniel Gildea. 2016. AMR-to-text generation as a traveling salesman problem. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2084–2089. https://aclweb.org/anthology/D161224. Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1054–1059. https://aclweb.org/anthology/D16-1112. Oriol Vinyals, Ł ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Proceedings of the 156 28th International Conference on Neural Information Processing Systems, MIT Press, pages 2773– 2781. http://papers.nips.cc/paper/5635-grammaras-a-foreign-language.pdf. Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. CAMR at SemEval-2016 Task 8: An extended transition-based AMR parser. In Proceedings of the 10th International Workshop on Semantic Evaluation. Association for Computational Linguistics, San Diego, California, pages 1173–1178. http://www.aclweb.org/anthology/S161181. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. http://arxiv.org/abs/1609.08144. Junsheng Zhou, Feiyu Xu, Hans Uszkoreit, Weiguang QU, Ran Li, and Yanhui Gu. 2016. AMR parsing with an incremental joint model. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 680–689. https://aclweb.org/anthology/D16-1065. 157
2017
14
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1524–1534 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1140 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1524–1534 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1140 Incorporating Word Reordering Knowledge into Attention-based Neural Machine Translation Jinchao Zhang1 Mingxuan Wang1 Qun Liu3,1 Jie Zhou2 1Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences {zhangjinchao,wangmingxuan,liuqun}@ict.ac.cn 2Baidu Research - Institute of Deep Learning, Baidu Inc.,Beijing,China {zhoujie01}@baidu.com 3ADAPT Centre, School of Computing, Dublin City University Abstract This paper proposes three distortion models to explicitly incorporate the word reordering knowledge into attention-based Neural Machine Translation (NMT) for further improving translation performance. Our proposed models enable attention mechanism to attend to source words regarding both the semantic requirement and the word reordering penalty. Experiments on Chinese-English translation show that the approaches can improve word alignment quality and achieve significant translation improvements over a basic attention-based NMT by large margins. Compared with previous works on identical corpora, our system achieves the state-of-the-art performance on translation quality. 1 Introduction Word reordering model is one of the most crucial sub-components in Statistical Machine Translation (SMT) (Brown et al., 1993; Koehn et al., 2003; Chiang, 2005) which provides word reordering knowledge to ensure reasonable translation order of source words. It is separately trained and then incorporated into the SMT framework in a pipeline style. In recent years, end-to-end NMT (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) has made tremendous progress (Jean et al., 2015; Luong et al., 2015b; Shen et al., 2016; Sennrich et al., 2016; Tu et al., 2016; Zhou et al., 2016; Johnson et al., 2016). An encoder-decoder framework (Cho et al., 2014b; Sutskever et al., 2014) with attention mechanism (Bahdanau et al., 2015) is widely used, in which an encoder compresses the source sentence, an attention mechanism evaluates related source words and a decoder generates target words. The attention mechanism evaluates the distribution of to-be-translated source words in a content-based addressing fashion (Graves et al., 2014) which tends to attend to the source words regarding the content relation with current translation status. Lack of explicit models to exploit the word reordering knowledge may lead to attention faults and generate fluent but inaccurate or inadequate translations. Table 1 shows a translation instance and Figure 1 depicts the corresponding word alignment matrix that produced by the attention mechanism. In this example, even though the word “zuixin (latest)” is a common adjective in Chinese and its following word should be translated soon in Chinese to English translation direction, the word “yiju (evidence)” does not obtain appropriate attention which leads to the incorrect translation. src youguan(related) baodao(report) shi(is) zhichi(support) tamen(their) lundian(arguments) de(’s) zuixin(latest) yiju(evidence) . 
ref the report is the latest evidence that supports their arguments . NMT the report supports their perception of the latest . count zuixin yiju {0} Table 1: An instance in Chinese-English translation task. The row “count” represents the frequency of the word collocation in the training corpus. The collocation “zuixin yiju” does not appear in the training data. 1524 Figure 1: The source word “yiju” does not obtain appropriate attention and its word sense is completely neglected. To enhance the attention mechanism, implicit word reordering knowledge needs to be incorporated into attention-based NMT. In this paper, we introduce three distortion models that originated from SMT (Brown et al., 1993; Koehn et al., 2003; Och et al., 2004; Tillmann, 2004; Al-Onaizan and Papineni, 2006), so as to model the word reordering knowledge as the probability distribution of the relative jump distances between the newly translated source word and the to-be-translated source word. Our focus is to extend the attention mechanism to attend to source words regarding both the semantic requirement and the word reordering penalty. Our models have three merits: 1. Extended word reordering knowledge. Our models capture explicit word reordering knowledge to guide the attending process for attention mechanism. 2. Convenient to be incorporated into attention-based NMT. Our distortion models are differentiable and can be trained in the end-to-end style. The interpolation approach ensures that the proposed models can coordinately work with the original attention mechanism. 3. Flexible to utilize variant context for computing the word reordering penalty. In this paper, we exploit three categories of information as distortion context conditions to compute the word reordering penalty, but variant context information can be utilized due to our model’s flexibility. We validate our models on the ChineseEnglish translation task and achieve notable improvements: • On 16K vocabularies, NMT models are usually inferior in comparison with the phrase-based SMT, but our model surpasses phrase-based Moses by average 4.43 BLEU points and outperforms the attention-based NMT baseline system by 5.09 BLEU points. • On 30K vocabularies, the improvements over the phrase-based Moses and the attention-based NMT baseline system are average 6.06 and 1.57 BLEU points respectively. • Compared with previous work on identical corpora, we achieve the state-of-theart translation performance on average. The word alignment quality evaluation shows that our model can effectively improve the word alignment quality that is crucial for improving translation quality. 2 Background We aim to capture word reordering knowledge for the attention-based NMT by incorporating distortion models. This section briefly introduces attention-based NMT and distortion models in SMT. 2.1 Attention-based Neural Machine Translation Formally, given a source sentence x = x1, ..., xm and a target sentence y = y1, ..., yn, NMT models the translation probability as P(y|x) = n ∏ t=1 P(yt|y<t, x), (1) where y<t = y1, ..., yt−1. The generation probability of yt is P(yt|y<t, x) = g(yt−1, ct, st), (2) where g(·) is a softmax regression function, yt−1 is the newly translated target word and 1525 Encoder h1 h2 hm Decoder ... x1 x2 xm ... ... Attention with Distortion ... <s> y1 yn-1 ... Ct Softmax y1 y2 yn ... 1 ~  tS Ψ 1  tS 1  t y Figure 2: The general architecture of our proposed models. The dash line represents variant context can be utilized to determine the word reordering penalty. 
st is the hidden states of decoder which represents the translation status. The attention ct denotes the related source words for generating yt and is computed as the weighted-sum of source representation h upon an alignment vector αt shown in Eq.(3) where the align(·) function is a feedforward network with softmax normalization. ct = m ∑ j=1 αt,jhj αt,j = align(st, hj) (3) The hidden states st is updated as st = f(st−1, yt−1, ct), (4) where f(·) is a recurrent function. We adopt a varietal attention mechanism1 in our in-house RNNsearch model which is implemented as est = f1(st−1, yt−1), αt,j = align(est, hj), st = f2(est, ct), (5) where f1(·) and f2(·) are recurrent functions. As shown in Eq.(3), the attention mechanism attends to source words in a contentbased addressing way without considering any explicit word reordering knowledge. We introduce distortion models to capture explicit word reordering knowledge for enhancing the attention mechanism and improving translation quality. 1https://github.com/nyu-dl/dl4mttutorial/tree/master/session2 2.2 Distortion Models in SMT In SMT, distortion models are linearly combined with other features, as follows, y∗= arg max y exp[λdd(x, y, b)+ R−1 ∑ r=1 λrhr(x, y, b)], (6) where d(·) is the distortion feature, hr(·) represents other features, λd and λr are the weights, b is the latent variable that represents translation knowledge and R is the number of features. IBM Models (Brown et al., 1993) depicted the word reordering knowledge as positional relations between source and target words. Koehn et al. (2003) proposed a distortion model for phrase-based SMT based on jump distances between the newly translated phrases and to-be-translated phrases which does not consider specific lexical information. Och et al. (2004) and Tillmann (2004) proposed orientation-based distortion models that consider translation orientations. Yaser and Papineni (2006) proposed a distortion model to estimate probability distribution on possible relative jumps conditioned on source words. These models are proposed for SMT and separately trained as sub-components. Inspired by these previous work, we introduce the distortion models into NMT model for modeling the word reordering knowledge. Our proposed models are designed for NMT which can be trained in the end-to-end style. 3 Distortion Models for attention-based NMT The basic idea of our proposed distortion models is to estimate the probability distribution of the possible relative jump distances between the newly translated source word and the tobe-translated source word upon the context condition. Figure 2 shows the general architecture of our proposed model. 3.1 General Architecture We employ an interpolation approach to incorporate distortion models into attention-based NMT as αt = λ · dt + (1 −λ)ˆαt, (7) 1526 Figure 3: Illustration of shift actions of the alignment vector αt−1. If αt is the left shift of αt−1, it represents the translation orientation of the source sentence is backward and if αt is the right shift of αt−1, the translation orientation is forward. where αt is the ultimate alignment vector for computing the related source context ct, dt is the alignment vector calculated by the distortion model, ˆαt is the alignment vector computed by the basic attention mechanism and λ is a hyper-parameter to control the weight of the distortion model. In the proposed distortion model, relative jumps on source words are depicted as the “shift” actions of the alignment vector αt−1 which is shown in the Figure 3. 
The right shift of αt−1 indicates that the translation orientation of source words is forward and the left shift represents that the translation orientation is backward. The extent of a shift action measures the word reordering distance. Alignment vector dt, which is produced by the distortion model, is the expectation of all possible shifts of αt−1 conditioned on certain context. Formally, the proposed distortion model is dt = E[Γ(αt−1)] = l ∑ k=−l P(k|Ψ) · Γ(αt−1, k), (8) where k ∈[−l, l] is the possible relative jump distance, l is the window size parameter and P(k|Ψ) stands for the probability of jump distance k that conditioned on the context Ψ. Function Γ(·) for shifting the alignment vector is defined as Γ(αt−1, k) =      {αt−1,−k, ..., αt−1,m, 0, ..., 0}, k<0 αt−1, k= 0 {0, ..., 0, αt−1,1, ..., αt−1,m−k}, k>0 (9) which can be implemented as matrix multiplication computations. We respectively exploit source context, target context and translation status context (hidden states of decoder) as Ψ and derive three distortion models: Source-based Distortion (S-Distortion) model , Targetbased Distortion (T-Distortion) model and Translation-status-based Distortion (H-Distortion) model. Our framework is capable of utilizing arbitrary context as the condition Ψ to predict the relative jump distances. 3.2 S-Distortion model S-Distortion model adopts previous source context ct−1 as the context Ψ with the intuition that certain source word indicate certain jump distance. The to-be-translated source word have intense positional relations with the newly translated one. The underlying linguistic intuition is that synchronous grammars (Yamada and Knight, 2001; Galley et al., 2004) can be extracted from language pairs. Word categories such as verb, adjective and preposition carry general word reordering knowledge and words carry specific word reordering knowledge. To further illustrate this idea, we present some common synchronous grammar rules that can be extracted from the example in Table 1 as follows, NP −→JJ NN | JJ NN JJ −→zuixin | latest. (10) From the above grammar, we can conjecture the speculation that after the word ”zuixin(latest)” is translated, the translation orientation is forward with shift distance 1. The probability function in S-Distortion model is defined as follows, P(·|Ψ) = z(ct−1) = softmax(Wcct−1 + bc), (11) where Wc ∈R(2l+1)×dim(ct−1) and bc ∈R2l+1 are weight matrix and bias parameters. 3.3 T-Distortion Model T-Distortion model exploits the embedding of the previous generated target word yt−1 as the context condition to predict the probability distribution of distortion distances. It focuses on the word reordering knowledge upon target 1527 word context. As illustrated in Eq.(10), the target word “latest” possesses word reordering knowledge that is identical with source word “zuixin”. The probability function in T-Distortion model is defined as follows, P(·|Ψ) = z(yt−1) = softmax(Wyemb(yt−1) + by), (12) where emb(yt−1) is the embedding of yt−1, Wy ∈R(2l+1)×dim(emb(yt−1)) and by ∈R2l+1 are weight matrix and bias parameters. 3.4 H-Distortion Model The hidden states ˜st−1 reflect the translation status and contains both source context and target context information. Therefore, we exploit ˜st−1 as context Ψ in the H-Distortion model to predict shift distances. The probability function in H-Distortion model is defined as follows, P(·|Ψ) = z(˜st−1) = softmax(Ws˜st−1 + bs) (13) where Ws ∈R(2l+1)×dim(˜st−1) and bs ∈R2l+1 are the weight matrix and bias parameters. 
4 Experiments We carry the translation task on the ChineseEnglish direction to evaluate the effectiveness of our models. To investigate the word alignment quality, we take the word alignment quality evaluation on the manually aligned corpus. We also conduct the experiments to observe effects of hyper-parameters and the training strategies. 4.1 Data and Metrics Data: Our Chinese-English training corpus consists of 1.25M sentence pairs extracted from LDC corpora2 with 27.9M Chinese words and 34.5M English words respectively. 16K vocabularies cover approximately 95.8% and 98.3% words and 30K vocabularies cover approximately 97.7% and 99.3% words in Chinese and English respectively. We choose NIST 2002 dataset as the validation set. NIST 2The corpora includes LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06. 2003-2006 are used as test sets. To assess the word alignment quality, we employ Tsinghua dataset (Liu and Sun, 2015) which contains 900 manually aligned sentence pairs. Metrics: The translation quality evaluation metric is the case-insensitive 4-gram BLEU3 (Papineni et al., 2002). Sign-test (Collins et al., 2005) is exploited for statistical significance test. Alignment error rate (AER) (Och and Ney, 2003) is calculated to assess the word alignment quality. 4.2 Comparison Systems We compare our approaches with three baseline systems: Moses (Koehn et al., 2007): An open source phrase-based SMT system with default settings. Words are aligned with GIZA++ (Och and Ney, 2003). The 4-gram language model with modified Kneser-Ney smoothing is trained on the target portion of training data by SRILM (Stolcke et al., 2002). Groundhog4: An open source attentionbased NMT system with default settings. RNNsearch∗: Our in-house implementation of NMT system with the varietal attention mechanism and other settings that presented in section 4.3. 4.3 Training Hyper parameters: The sentence length for training NMTs is up to 50, while SMT model exploits whole training data without any restrictions. Following Bahdanau et al. (2015), we use bi-directional Gated Recurrent Unit (GRU) as the encoder. The forward representation and the backward representation are concatenated at the corresponding position as the ultimate representation of a source word. The word embedding dimension is set to 620 and the hidden layer size is 1000. The interpolation parameter λ is 0.5 and the window size l is set to 3. Training details: Square matrices are initialized in a random orthogonal way. 
Non-square matrices are initialized by sampling each element from the 3ftp://jaguar.ncsl.nist.gov/mt/resources/ mteval-v11b.pl 4https://github.com/lisa-groundhog/ GroundHog 1528 Systems MT03 MT04 MT05 MT06 Average Average Increase Moses 31.61 33.48 30.75 30.85 31.67 − Groundhog(16K) 29.14 31.23 28.11 27.77 29.06 − RNNsearch∗(16K) 30.77 33.92 30.82 28.56 31.02 − + T-Distortion 35.71‡ 37.81‡ 33.78‡ 33.79‡ 35.27 +(4.26, 3.60, 6.21) + S-Distortion 36.58‡ 38.47 ‡ 34.85‡ 33.86‡ 35.94 +(4.92, 4.27, 6.88) + H-Distortion 35.95‡ 38.77‡ 35.33‡ 34.36‡ 36.10 +(5.09, 4.43, 7.04) Groundhog(30K) 31.92 34.09 31.56 31.12 32.17 − RNNsearch∗(30K) 36.47 39.17 35.04 33.97 36.16 − + T-Distortion 37.93† 40.40‡ 36.81‡ 35.77‡ 37.73 +(1.57, 6.06, 5.56) + S-Distortion 37.47† 40.52‡ 36.16‡ 35.32 37.37 +(1.21, 5.70, 5.20) + H-Distortion 38.33‡ 40.11‡ 36.71† 35.29‡ 37.61 +(1.45, 5.94, 5.44) Table 2: BLEU-4 scores (%) on NIST test set 03-06 of Moses (default settings), Groundhog (default settings), RNNsearch∗and RNNsearch∗with distortion models respectively. The values in brackets are increases on RNNsearch∗, Moses and Groundhog respectively. ‡ indicates statistical significant difference (p<0.01) from RNNsearch∗and † means statistical significant difference (p<0.05) from RNNsearch∗. Gaussian distribution with mean 0 and variance 0.012. All bias are initialized to 0. Parameters are updated by Mini-batch Gradient Descent and the learning rate is controlled by the AdaDelta (Zeiler, 2012) algorithm with decay constant ρ = 0.95 and denominator constant ϵ = 1e −6. The batch size is 80. Dropout strategy (Srivastava et al., 2014) is applied to the output layer with the dropout rate 0.5 to avoid over-fitting. The gradients of the cost function which have L2 norm larger than a predefined threshold 1.0 is normalized to the threshold to avoid gradients explosion (Pascanu et al., 2013). We exploit length normalization (Cho et al., 2014a) on candidate translations and the beam size for decoding is 12. For NMT with distortion models, we use trained RNNsearch∗model to initialize parameters except for those related to distortions. 4.4 Results The translation quality experiment results are shown in Table 2. We carry the experiments on different vocabulary sizes for that different vocabulary sizes cause different degrees of the rare word collocations. Through this way, we can validate the effects of our proposed models in alleviating the rare word collocations problem that leads to incorrect word alignments. On 16K vocabularies: The phrase-based Moses performs better than the basic NMTs including Groundhog and RNNsearch∗. Besides the differences between model architectures, restricted vocabularies and sentence length also affect the performance of NMTs. However, RNNsearch∗with distortion models surpass phrase-based Moses by average 3.60, 4.27 and 4.43 BLEU points. RNNsearch∗outperforms Groundhog by average 1.96 BLEU points due to the varietal attention mechanism, length normalization and dropout strategies. Distortion models bring about remarkable improvements as 4.26, 4.92 and 5.09 BLEU points over the RNNsearch∗model. On 30K vocabularies: RNNsearch∗with distortion models yield average gains by 1.57, 1.21 and 1.45 BLEU points over RNNsearch∗ and outperform phrase-based Moses by average 6.06, 5.70 and 5.94 BLEU points and surpass GroundHog by average 5.56, 5.20 and 5.44 BLEU points. RNNsearch∗(16K) with distortion models achieve close performances with RNNsearch∗(30K). 
The improvements on 16K vocabularies are larger than that on 30K vocabularies for the intuition that more ”UNK” words lead to more rare word collocations, which results in serious attention ambiguities. The RNNsearch∗with distortion models yield tremendous improvements on BLEU scores proves the effectiveness of proposed approaches in improving translation quality. Comparison with previous work: We present the performance comparison with pre1529 System Length MT03 MT04 MT05 MT06 Average Coverage 80 32.73 32.47 MEMDEC 50 36.16 39.81 35.91 35.98 36.95 NMTIA 80 35.69 39.24 35.74 35.10 36.44 Our work 50 37.93 40.40 36.81 35.77 37.73 Table 3: Comparison with previous work on identical training corpora. Coverage (Tu et al., 2016) is a basic RNNsearch model with a coverage model to alleviate the over-translation and under-translation problems. MEMDEC (Wang et al., 2016) is to improve translation quality with external memory. NMTIA (Meng et al., 2016) exploits a readable and writable attention mechanism to keep track of interactive history in decoding. Our work is NMT with H-Distortion model. The vocabulary sizes of all work are 30K and maximum lengths of sentence differ. (a) (b) Figure 4: (a) is the output of the distortion model and is calculated on shift actions of previous alignment vector. (b) is the ultimate word alignment matrix of attention-based NMT with H-Distortion model. Compared with Figure 1, (b) is more centralized and accurate. Systems BLEU AER RNNsearch∗(30K) 20.90 49.73 + T-Distortion 24.33‡ 46.92 + S-Distortion 24.10‡ 47.37 + H-Distortion 24.42‡ 47.05 Table 4: BLEU-4 scores (%) and AER scores on Tsinghua manually aligned Chinese-English evaluation set. The lower the AER score, the better the alignment quality. vious work that employ identical training corpora in Table 3. Our work evidently outperforms previous work on average performance. Although we restrict the maximum length of sentence to 50, our model achieves the stateof-the-art BLEU scores on almost all test sets except NIST2006. 4.5 Analysis We investigate the effects on the alignment quality of our models and conduct the experiments to evaluate the influence of the hyperparameter settings and the training strategies. 4.5.1 Alignment Quality Distortion models concentrate on attending to to-be-translated words based on the word reordering knowledge and can intuitively enhance the word alignment quality. To investigate the effect on word alignment quality, we apply the BLEU and AER evaluations on Tsinghua manually aligned data set. 1530 (a) (b) Figure 5: Translation performance on the test sets with respect to the hyper-parameter λ and l. System MT03 MT04 MT05 MT06 Average Pre-training 35.95 38.77 35.33 34.36 36.10 No pre-training 36.99 38.42 34.56 34.01 36.00 Table 5: Comparison between pre-training and no pre-training H-Distortion model. The performances are consistent. Table 4 lists the BLEU and AER scores of Chinese-English translation with 30K vocabulary. RNNsearch*(30K) with distortion models achieve significant improvements on BLEU scores and obvious decrease on AER scores. The results shows that the proposed model can effectively improve the word alignment quality Figure 4 shows the output of distortion model and ultimate alignment matrix of the above-mentioned instance. Compared with Figure 1, the alignment matrix produced by NMT with distortion models is more concentrated and accurate. The output of distortion model shows its capacity of modeling word reordering knowledge. 
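For reference, the alignment-quality evaluation sketched above can be reproduced roughly as follows: hard alignment links are read off the attention matrix by taking, for each target word, the source position with the largest weight, and AER follows the standard definition of Och and Ney (2003). This is my reconstruction for illustration, not the evaluation script used in the paper.

```python
import numpy as np

def attention_to_links(attn):
    """attn: (target_len, source_len) attention matrix; returns hard links {(src, tgt)}."""
    return {(int(np.argmax(row)), t) for t, row in enumerate(attn)}

def aer(candidate, sure, possible):
    """Alignment Error Rate (Och and Ney, 2003); `possible` is taken to include `sure`."""
    p = possible | sure
    return 1.0 - (len(candidate & sure) + len(candidate & p)) / (len(candidate) + len(sure))
```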
4.5.2 Effect of Hyper-parameters To investigate the effect of the weight hyperparameter λ and window hyper-parameter l in the proposed model, we carry experiments on H-Distortion model with variable hyperparameter settings. We fix l = 3 for exploring the effect of λ and fix λ = 0.5 for observing the effect of l. Figure 5 presents the translation performances with respect to hyperparameters. With the increase of weight λ, the BLEU scores first rise and then drop, which shows the distortion model provides additional helpful information while can not fully cover the attention mechanism for its insufficient content searching ability. For window l, the experiments show that larger windows bring slight further improvements, which indicates that distortion model pays more attention to the short-distance reordering knowledge. 4.5.3 Pre-training VS No Pre-training We conduct the experiment without using pretraining strategy to observe the effect of the initialization. As is shown in Table 5, the no-pre-training model achieves consistent improvements with the pre-training one which verifies the stable effectiveness of our approach. Initialization with pre-training strategy provides a fast approach to obtain the model for it needs fewer training iterations. 5 Related Work Our work is inspired by the distortion models that widely used in SMT. The most related work in SMT is the distortion model proposed by Yaser and Papineni (2006). Their model is identical to our S-Distortion model that captures the relative jump distance knowledge on source words. However, our approach is deliberately designed for the attention-based NMT system and is capable of exploiting variant context information to predict the relative jump distances. Our work is related to the work (Luong et al., 2015a; Feng et al., 2016; Tu et al., 2016; 1531 Cohn et al., 2016; Meng et al., 2016; Wang et al., 2016) that concentrate on the improvement of the attention mechanism. To remit the computing cost of the attention mechanism when dealing with long sentences, Luong et al. (2015a) proposed the local attention mechanism by just focusing on a subscope of source positions. Cohn et al. (2016) incorporated structural alignment biases into the attention mechanism and obtained improvements across several challenging language pairs in low-resource settings. Feng et al. (2016) passed the previous attention context to the attention mechanism by adding recurrent connections as the implicit distortion model. Tu et al. (2016) maintained a coverage vector for keeping the attention history to acquire accurate translations. Meng et al. (2016) proposed the interactive attention with the attentive read and attentive write operation to keep track of the interaction history. Wang et al. (2016) utilized an external memory to store additional information for guiding the attention computation. These works are different from ours, as our distortion models explicitly capture word reordering knowledge through estimating the probability distribution of relative jump distances on source words to incorporate word reordering knowledge into the attention-based NMT. 6 Conclusions We have presented three distortion models to enhance attention-based NMT through incorporating the word reordering knowledge. The basic idea of proposed distortion models is to enable the attention mechanism to attend to the source words regarding both semantic requirement and the word reordering penalty. 
Experiments show that our models can evidently improve the word alignment quality and translation performance. Compared with previous work on identical corpora, our model achieves the state-of-the-art performance on average. Our model is convenient to be applied in the attention-based NMT and can be trained in the end-to-end style. We also investigated the effect of hyper-parameters and pre-training strategy and further proved the stable effectiveness of our model. In the future, we plan to validate the effectiveness of our model on more language pairs. 7 Acknowledgement Qun Liu’s work is partially supported by Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University funded under the SFI Research Centres Programme (Grant 13/RC/2106) co-funded under the European Regional Development Fund. We are grateful to Qiuye Zhao, Fandong Meng and Daqi Zheng for their helpful suggestions. We thank the anonymous reviewers for their insightful comments. References Yaser Al-Onaizan and Kishore Papineni. 2006. Distortion models for statistical machine translation. In Proceedings of ACL2006. pages 529–536. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR2015. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation:parameter estimation. Computational Linguistics 19(2):263––311. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL2005. pages 263–270. Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder–decoder approaches. In Eighth Workshop on Syntax,Semantics and Structure in Statistical Translation. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of EMNLP 2014. Doha, Qatar, pages 1724–1734. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of NAACL2016. pages 876––885. 1532 Michael Collins, Philipp Koehn, and Ivona Kučerová. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL2005. pages 531–540. Shi Feng, Shu jie Liu, Mu Li, and Ming Zhou. 2016. Implicit distortion and fertility models for attention-based encoder-decoder nmt model. arXiv preprint arXiv:1601.03317 . Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule. In Proceedings of HLT/NAACL. Boston, volume 4, pages 273–280. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401 . Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL2014. volume 1, pages 1–10. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, and Greg Corrado. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. In arXiv preprint arXiv:1609.08144. Nal Kalchbrenner and Phil Blunsom. 
2013. Recurrent continuous translation models. In Proceedings of EMNLP2013. Seattle, Washington, USA, pages 1700–1709. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL2007 Demo and Poster Sessions. Prague, Czech Republic, pages 177–180. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings NAACL2003. pages 48–54. Yang Liu and Maosong Sun. 2015. Contrastive unsupervised word alignment with non-local features. In Proceedings of AAAI2015. pages 2295–2301. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP2015. Lisbon, Portugal. Minh Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. Proceedings of ACL2015 27(2):82–86. Fandong Meng, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Interactive attention for neural machine translation. In Proceedings of COLING2016. Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alexander M Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, et al. 2004. A smorgasbord of features for statistical machine translation. In HLT-NAACL. pages 161–168. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL2002. Association for Computational Linguistics, pages 311–318. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. ICML (3) 28:1310–1318. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL2016. pages 1715–1725. Shiqi Shen, Yong Cheng, Zhongjun He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of ACL2016. pages 1683–1692. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958. Andreas Stolcke et al. 2002. Srilm-an extensible language modeling toolkit. In Proceedings of the international conference on spoken language processing. volume 2, pages 901–904. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS2014. Christoph Tillmann. 2004. A unigram orientation model for statistical machine translation. In Proceedings of HLT-NAACL 2004: Short Papers. pages 101–104. Zhaopeng Tu, Zhengdong Lu, yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL. pages 76–85. 1533 Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Memory-enhanced decoder for neural machine translation. In Proceedings of EMNLP2016. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of ACL2001. pages 523–530. Al-Onaizan Yaser and Kishore Papineni. 2006. 
Distortion models for statistical machine translation. In Proceedings of ACL2006. pages 529–536. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. In Proceedings of EMNLP2016.
2017
140
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1535–1546, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1141

Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search

Chris Hokamp ADAPT Centre Dublin City University [email protected] Qun Liu ADAPT Centre Dublin City University [email protected]

Abstract
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence ŷ = {y_0 . . . y_T}, by maximizing p(y|x) = ∏_t p(y_t | x; {y_0 . . . y_{t−1}}). Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.

1 Introduction
The output of many natural language processing models is a sequence of text. Examples include automatic summarization (Rush et al., 2015), machine translation (Koehn, 2010; Bahdanau et al., 2014), caption generation (Xu et al., 2015), and dialog generation (Serban et al., 2016), among others. In some real-world scenarios, additional information that could inform the search for the optimal output sequence may be available at inference time. Humans can provide corrections after viewing a system's initial output, or separate classification models may be able to predict parts of the output with high confidence. When the domain of the input is known, a domain terminology may be employed to ensure specific phrases are present in a system's predictions. Our goal in this work is to find a way to force the output of a model to contain such lexical constraints, while still taking advantage of the distribution learned from training data.

For Machine Translation (MT) use cases in particular, final translations are often produced by combining automatically translated output with user inputs. Examples include Post-Editing (PE) (Koehn, 2009; Specia, 2011) and Interactive-Predictive MT (Foster, 2002; Barrachina et al., 2009; Green, 2014). These interactive scenarios can be unified by considering user inputs to be lexical constraints which guide the search for the optimal output sequence. In this paper, we formalize the notion of lexical constraints, and propose a decoding algorithm which allows the specification of subsequences that are required to be present in a model's output. Individual constraints may be single tokens or multi-word phrases, and any number of constraints may be specified simultaneously.
Although we focus upon interactive applications for MT in our experiments, lexically constrained decoding is relevant to any scenario where a model is asked to generate a sequence ŷ = {y_0 . . . y_T} given both an input x and a set {c_0 . . . c_n}, where each c_i is a sub-sequence {c_{i0} . . . c_{ij}} that must appear somewhere in ŷ. This makes our work applicable to a wide range of text generation scenarios, including image description, dialog generation, abstractive summarization, and question answering.

Figure 1: A visualization of the decoding process for an actual example from our English-German MT experiments. The output token at each timestep appears at the top of the figure, with lexical constraints enclosed in boxes. Generation is shown in blue, Starting new constraints in green, and Continuing constraints in red. The function used to create the hypothesis at each timestep is written at the bottom. Each box in the grid represents a beam; a colored strip inside a beam represents an individual hypothesis in the beam's k-best stack. Hypotheses with circles inside them are closed, all other hypotheses are open. (Best viewed in colour.)

The rest of this paper is organized as follows: Section 2 gives the necessary background for our discussion of GBS, Section 3 discusses the lexically constrained decoding algorithm in detail, Section 4 presents our experiments, and Section 5 gives an overview of closely related work.

2 Background: Beam Search for Sequence Generation
Under a model parameterized by θ, let the best output sequence ŷ given input x be Eq. 1:

ŷ = argmax_{y ∈ {y^[T]}} p_θ(y|x),   (1)

where we use {y^[T]} to denote the set of all sequences of length T. Because the number of possible sequences for such a model is |v|^T, where |v| is the number of output symbols, the search for ŷ can be made more tractable by factorizing p_θ(y|x) into Eq. 2:

p_θ(y|x) = ∏_{t=0}^{T} p_θ(y_t | x; {y_0 . . . y_{t−1}}).   (2)

The standard approach is thus to generate the output sequence from beginning to end, conditioning the output at each timestep upon the input x and the already-generated symbols {y_0 . . . y_{t−1}}. However, greedy selection of the most probable output at each timestep, i.e.:

ŷ_t = argmax_{y_i ∈ {v}} p(y_i | x; {y_0 . . . y_{t−1}}),   (3)

risks making locally optimal decisions which are actually globally sub-optimal. On the other hand, an exhaustive exploration of the output space would require scoring |v|^T sequences, which is intractable for most real-world models. Thus, a search or decoding algorithm is often used as a compromise between these two extremes. A common solution is to use a heuristic search to attempt to find the best output efficiently (Pearl, 1984; Koehn, 2010; Rush et al., 2013). The key idea is to discard bad options early, while trying to avoid discarding candidates that may be locally risky, but could eventually result in the best overall output.

Figure 2: Different structures for beam search. Boxes represent beams which hold k-best lists of hypotheses. (A) Chart Parsing using SCFG rules to cover spans in the input. (B) Source coverage as used in PB-SMT. (C) Sequence timesteps (as used in Neural Sequence Models); GBS is an extension of (C). In (A) and (B), hypotheses are finished once they reach the final beam. In (C), a hypothesis is only complete if it has generated an end-of-sequence (EOS) symbol.

Beam search (Och and Ney, 2004) is probably the most popular search algorithm for decoding sequences. Beam search is simple to implement, and is flexible in the sense that the semantics of the
graph of beams can be adapted to take advantage of additional structure that may be available for specific tasks. For example, in Phrase-Based Statistical MT (PB-SMT) (Koehn, 2010), beams are organized by the number of source words that are covered by the hypotheses in the beam – a hypothesis is “finished” when it has covered all source words. In chart-based decoding algorithms such as CYK, beams are also tied to coverage of the input, but are organized as cells in a chart, which facilitates search for the optimal latent structure of the output (Chiang, 2007). Figure 2 visualizes three common ways to structure search. (A) and (B) depend upon explicit structural information between the input and output, (C) only assumes that the output is a sequence where later symbols depend upon earlier ones. Note also that (C) corresponds exactly to the bottom rows of Figures 1 and 3. With the recent success of neural models for text generation, beam search has become the de-facto choice for decoding optimal output sequences (Sutskever et al., 2014). However, with neural sequence models, we cannot organize beams by their explicit coverage of the input. A simpler alternative is to organize beams by output timesteps from t0 · · · tN, where N is a hyperparameter that can be set heuristically, for example by multiplying a factor with the length of the input to make an educated guess about the maximum length of the output (Sutskever et al., 2014). Output sequences are generally considered complete once a special “end-of-sentence”(EOS) token has been generated. Beam size in these models is also typically kept small, and recent work has shown Figure 3: Visualizing the lexically constrained decoder’s complete search graph. Each rectangle represents a beam containing k hypotheses. Dashed (diagonal) edges indicate starting or continuing constraints. Horizontal edges represent generating from the model’s distribution. The horizontal axis covers the timesteps in the output sequence, and the vertical axis covers the constraint tokens (one row for each token in each constraint). Beams on the top level of the grid contain hypotheses which cover all constraints. that the performance of some architectures can actually degrade with larger beam size (Tu et al., 2016). 3 Grid Beam Search Our goal is to organize decoding in such a way that we can constrain the search space to outputs which contain one or more pre-specified sub-sequences. We thus wish to use a model’s distribution both to “place” lexical constraints correctly, and to generate the parts of the output which are not covered by the constraints. Algorithm 1 presents the pseudo-code for lexically constrained decoding, see Figures 1 and 3 for visualizations of the search process. Beams in the grid are indexed by t and c. The t variable tracks the timestep of the search, while the c variable indicates how many constraint tokens are covered by the hypotheses in the current beam. Note that each step of c covers a single constraint token. In other words, constraints is an array of sequences, where individual tokens can be indexed as constraintsij, i.e. tokenj in constrainti. The numC parameter in Algorithm 1 represents the total number of tokens in all constraints. The hypotheses in a beam can be separated into two types (see lines 9-11 and 15-19 of Algorithm 1): 1. open hypotheses can either generate from the model’s distribution, or start available constraints, 2. 
closed hypotheses can only generate the next token for a currently unfinished constraint.

Algorithm 1 Pseudo-code for Grid Beam Search; note that the t and c indices are 0-based.
1: procedure CONSTRAINEDSEARCH(model, input, constraints, maxLen, numC, k)
2:   startHyp ⇐ model.getStartHyp(input, constraints)
3:   Grid ⇐ initGrid(maxLen, numC, k)   ▷ initialize beams in grid
4:   Grid[0][0] = startHyp
5:   for t = 1, t++, t < maxLen do
6:     for c = max(0, (numC + t) − maxLen), c++, c ≤ min(t, numC) do
7:       n, s, g = ∅
8:       for each hyp ∈ Grid[t − 1][c] do
9:         if hyp.isOpen() then
10:          g ⇐ g ∪ model.generate(hyp, input, constraints)   ▷ generate new open hyps
11:        end if
12:      end for
13:      if c > 0 then
14:        for each hyp ∈ Grid[t − 1][c − 1] do
15:          if hyp.isOpen() then
16:            n ⇐ n ∪ model.start(hyp, input, constraints)   ▷ start new constrained hyps
17:          else
18:            s ⇐ s ∪ model.continue(hyp, input, constraints)   ▷ continue unfinished
19:          end if
20:        end for
21:      end if
22:      Grid[t][c] = k-argmax_{h ∈ n ∪ s ∪ g} model.score(h)   ▷ k-best scoring hypotheses stay on the beam
23:    end for
24:  end for
25:  topLevelHyps ⇐ Grid[:][numC]   ▷ get hyps in top-level beams
26:  finishedHyps ⇐ hasEOS(topLevelHyps)   ▷ finished hyps have generated the EOS token
27:  bestHyp ⇐ argmax_{h ∈ finishedHyps} model.score(h)
28:  return bestHyp
29: end procedure

At each step of the search, the beam at Grid[t][c] is filled with candidates which may be created in three ways:
1. the open hypotheses in the beam to the left (Grid[t − 1][c]) may generate continuations from the model's distribution p_θ(y_i | x, {y_0 . . . y_{i−1}}),
2. the open hypotheses in the beam to the left and below (Grid[t − 1][c − 1]) may start new constraints,
3. the closed hypotheses in the beam to the left and below (Grid[t − 1][c − 1]) may continue constraints.

Therefore, the model in Algorithm 1 implements an interface with three functions: generate, start, and continue, which build new hypotheses in each of the three ways. Note that the scoring function of the model does not need to be aware of the existence of constraints, but it may be, for example via a feature which indicates if a hypothesis is part of a constraint or not.

The beams at the top level of the grid (beams where c = numConstraints) contain hypotheses which cover all of the constraints. Once a hypothesis on the top level generates the EOS token, it can be added to the set of finished hypotheses. The highest scoring hypothesis in the set of finished hypotheses is the best sequence which covers all constraints. [Footnote 1: Our implementation of GBS is available at https://github.com/chrishokamp/constrained_decoding]

3.1 Multi-token Constraints
By distinguishing between open and closed hypotheses, we can allow for arbitrary multi-token phrases in the search. Thus, the set of constraints for a particular output may include both individual tokens and phrases. Each hypothesis maintains a coverage vector to ensure that constraints cannot be repeated in a search path – hypotheses which have already covered constraint_i can only generate, or start constraints that have not yet been covered. Note also that discontinuous lexical constraints, such as phrasal verbs in English or German, are easy to incorporate into GBS, by adding filters to the search, which require that one or more conditions must be met before a constraint can be used.
For example, adding the phrasal verb “ask ⟨someone⟩ out” as a constraint would mean using “ask” as constraint_0 and “out” as constraint_1, with two filters: one requiring that constraint_1 cannot be used before constraint_0, and another requiring that there must be at least one generated token between the constraints.

3.2 Subword Units
Both the computation of the score for a hypothesis, and the granularity of the tokens (character, subword, word, etc.) are left to the underlying model. Because our decoder can handle arbitrary constraints, there is a risk that constraints will contain tokens that were never observed in the training data, and thus are unknown by the model. Especially in domain adaptation scenarios, some user-specified constraints are very likely to contain unseen tokens. Subword representations provide an elegant way to circumvent this problem, by breaking unknown or rare tokens into character n-grams which are part of the model's vocabulary (Sennrich et al., 2016; Wu et al., 2016). In the experiments in Section 4, we use this technique to ensure that no input tokens are unknown, even if a constraint contains words which never appeared in the training data. [Footnote 2: If a character that was not observed in the training data is observed at prediction time, it will be unknown. However, we did not observe this in any of our experiments.]

3.3 Efficiency
Because the number of beams is multiplied by the number of constraints, the runtime complexity of a naive implementation of GBS is O(ktc). Standard time-based beam search is O(kt); therefore, some consideration must be given to the efficiency of this algorithm. Note that the beams in each column c of Figure 3 are independent, meaning that GBS can be parallelized to allow all beams at each timestep to be filled simultaneously. Also, we find that the most time is spent computing the states for the hypothesis candidates, so by keeping the beam size small, we can make GBS significantly faster.

3.4 Models
The models used for our experiments are state-of-the-art Neural Machine Translation (NMT) systems using our own implementation of NMT with attention over the source sequence (Bahdanau et al., 2014). We used Blocks and Fuel to implement our NMT models (van Merrinboer et al., 2015). To conduct the experiments in the following section, we trained baseline translation models for English–German (EN-DE), English–French (EN-FR), and English–Portuguese (EN-PT). We created a shared subword representation for each language pair by extracting a vocabulary of 80000 symbols from the concatenated source and target data. See the Appendix for more details on our training data and hyperparameter configuration for each language pair. The beamSize parameter is set to 10 for all experiments.

Because our experiments use NMT models, we can now be more explicit about the implementations of the generate, start, and continue functions for this GBS instantiation. For an NMT model at timestep t, generate(hyp_{t−1}) first computes a vector of output probabilities o_t = softmax(g(y_{t−1}, s_i, c_i)) using the state information available from hyp_{t−1}, and returns the best k continuations, i.e. Eq. 4:

g_t = k-argmax_i o_{t_i}.   (4)

The start and continue functions simply index into the softmax output of the model, selecting specific tokens instead of doing a k-argmax over the entire target language vocabulary. For example, to start constraint c_i, we find the score of token c_{i0}, i.e. o_{t, c_{i0}}.
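Before moving to the experiments, it may help to see Algorithm 1 and the indexing behaviour described above in code. The following is a minimal Python sketch, not the released implementation: the model object is assumed to expose get_start_hyp, generate, start, continue (written continue_ here, since continue is a Python keyword) and score, and hypotheses are assumed to expose is_open() and ends_with_eos(); start_constraint shows how starting a constraint looks up specific entries of the softmax vector instead of taking a k-argmax. All of these names are illustrative.

import numpy as np

def grid_beam_search(model, inp, constraints, max_len, k):
    # numC in Algorithm 1: total number of constraint tokens
    num_c = sum(len(c) for c in constraints)
    # grid[t][c] holds the k-best hypotheses at timestep t that cover c constraint tokens
    grid = [[[] for _ in range(num_c + 1)] for _ in range(max_len)]
    grid[0][0] = [model.get_start_hyp(inp, constraints)]
    for t in range(1, max_len):
        for c in range(max(0, (num_c + t) - max_len), min(t, num_c) + 1):
            candidates = []
            for hyp in grid[t - 1][c]:              # same coverage, previous timestep
                if hyp.is_open():
                    candidates.extend(model.generate(hyp, inp, constraints))
            if c > 0:
                for hyp in grid[t - 1][c - 1]:      # one fewer covered constraint token
                    if hyp.is_open():
                        candidates.extend(model.start(hyp, inp, constraints))
                    else:
                        candidates.extend(model.continue_(hyp, inp, constraints))
            # k-best scoring candidates stay on the beam
            grid[t][c] = sorted(candidates, key=model.score, reverse=True)[:k]
    # finished hypotheses sit in the top-level beams (all constraints covered) and end with EOS
    finished = [h for t in range(max_len) for h in grid[t][num_c] if h.ends_with_eos()]
    return max(finished, key=model.score) if finished else None

def start_constraint(probs, constraints, coverage):
    # probs: softmax output o_t over the target vocabulary for one open hypothesis;
    # look up the score of the first token of each not-yet-covered constraint
    return [(i, phrase[0], float(np.log(probs[phrase[0]])))
            for i, phrase in enumerate(constraints) if not coverage[i]]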
[Footnote 3: We use the notation for the g function from Bahdanau et al. (2014).]

4 Experiments
4.1 Pick-Revise for Interactive Post-Editing
Pick-Revise is an interaction cycle for MT Post-Editing proposed by Cheng et al. (2016). Starting with the original translation hypothesis, a (simulated) user first picks a part of the hypothesis which is incorrect, and then provides the correct translation for that portion of the output. The user-provided correction is then used as a constraint for the next decoding cycle. The Pick-Revise process can be repeated as many times as necessary, with a new constraint being added at each cycle.

ITERATION | 0 | 1 | 2 | 3
Strict Constraints
EN-DE | 18.44 | 27.64 (+9.20) | 36.66 (+9.01) | 43.92 (+7.26)
EN-FR | 28.07 | 36.71 (+8.64) | 44.84 (+8.13) | 45.48 (+0.63)
EN-PT* | 15.41 | 23.54 (+8.25) | 31.14 (+7.60) | 35.89 (+4.75)
Relaxed Constraints
EN-DE | 18.44 | 26.43 (+7.98) | 34.48 (+8.04) | 41.82 (+7.34)
EN-FR | 28.07 | 33.8 (+5.72) | 40.33 (+6.53) | 47.0 (+6.67)
EN-PT* | 15.41 | 23.22 (+7.80) | 33.82 (+10.6) | 40.75 (+6.93)

Table 1: Results for four simulated editing cycles using WMT test data. EN-DE uses newstest2013, EN-FR uses newstest2014, and EN-PT uses the Autodesk corpus discussed in Section 4.2. Improvement in BLEU score over the previous cycle is shown in parentheses. * indicates use of our test corpus created from Autodesk post-editing data.

We modify the experiments of Cheng et al. (2016) slightly, and assume that the user only provides sequences of up to three words which are missing from the hypothesis. [Footnote 4: NMT models do not use explicit alignment between source and target, so we cannot use alignment information to map target phrases to source phrases.] To simulate user interaction, at each iteration we choose a phrase of up to three tokens from the reference translation which does not appear in the current MT hypothesis. In the strict setting, the complete phrase must be missing from the hypothesis. In the relaxed setting, only the first word must be missing. Table 1 shows results for a simulated editing session with four cycles. When a three-token phrase cannot be found, we back off to two-token phrases, then to single tokens as constraints. If a hypothesis already matches the reference, no constraints are added. By specifying a new constraint of up to three words at each cycle, an increase of over 20 BLEU points is achieved in all language pairs.

4.2 Domain Adaptation via Terminology
The requirement for use of domain-specific terminologies is common in real-world applications of MT (Crego et al., 2016). Existing approaches incorporate placeholder tokens into NMT systems, which requires modifying the pre- and post-processing of the data, and training the system with data that contains the same placeholders which occur in the test data (Crego et al., 2016). The MT system also loses any possibility to model the tokens in the terminology, since they are represented by abstract tokens such as “⟨TERM 1⟩”. An attractive alternative is to simply provide term mappings as constraints, allowing any existing system to adapt to the terminology used in a new test domain.

For the target domain data, we use the Autodesk Post-Editing corpus (Zhechev, 2012), which is a dataset collected from actual MT post-editing sessions. The corpus is focused upon software localization, a domain which is likely to be very different from the WMT data used to train our general domain models. We divide the corpus into approximately 100,000 training sentences and 1000 test segments, and automatically generate a terminology by computing the Pointwise Mutual Information (PMI) (Church and Hanks, 1990) between source and target n-grams in the training set. We extract all n-grams of length 2–5 as terminology candidates.

pmi(x; y) = log( p(x, y) / (p(x) p(y)) )   (5)

npmi(x; y) = pmi(x; y) / h(x, y)   (6)

Equations 5 and 6 show how we compute the normalized PMI for a terminology candidate pair. The PMI score is normalized to the range [−1, +1] by dividing by the entropy h of the joint probability p(x, y). We then filter the candidates to only include pairs whose PMI is ≥ 0.9, and where both the source and target phrases occur at least five times in the corpus. When source phrases that match the terminology are observed in the test
We divide the corpus into approximately 100,000 training sentences, and 1000 test segments, and automatically generate a terminology by computing the Pointwise Mutual Information (PMI) (Church and Hanks, 1990) between source and target n-grams in the training set. We extract all n-grams from length 2-5 as terminology candidates. pmi(x; y) = log p(x, y) p(x)p(y) (5) npmi(x; y) = pmi(x; y) h(x, y) (6) Equations 5 and 6 show how we compute the normalized PMI for a terminology candidate pair. The PMI score is normalized to the range [−1, +1] by dividing by the entropy h of the joint probability p(x, y). We then filter the candidates to only include pairs whose PMI is ≥0.9, and where both the source and target phrases occur at least five times in the corpus. When source phrases that match the terminology are observed in the test 1540 data, the corresponding target phrase is added to the constraints for that segment. Results are shown in Table 2. As a sanity check that improvements in BLEU are not merely due to the presence of the terms somewhere in the output, i.e. that the placement of the terms by GBS is reasonable, we also evaluate the results of randomly inserting terms into the baseline output, and of prepending terms to the baseline output. This simple method of domain adaptation leads to a significant improvement in the BLEU score without any human intervention. Surprisingly, even an automatically created terminology combined with GBS yields performance improvements of approximately +2 BLEU points for EnDe and En-Fr, and a gain of almost 14 points for En-Pt. The large improvement for En-Pt is probably due to the training data for this system being very different from the IT domain (see Appendix). Given the performance improvements from our automatically extracted terminology, manually created domain terminologies with good coverage of the test domain are likely to lead to even greater gains. Using a terminology with GBS is likely to be beneficial in any setting where the test domain is significantly different from the domain of the model’s original training data. System BLEU EN-DE Baseline 26.17 Random 25.18 (-0.99) Beginning 26.44 (+0.26) GBS 27.99 (+1.82) EN-FR Baseline 32.45 Random 31.48 (-0.97) Beginning 34.51 (+2.05) GBS 35.05 (+2.59) EN-PT Baseline 15.41 Random 18.26 (+2.85) Beginning 20.43 (+5.02) GBS 29.15 (+13.73) Table 2: BLEU Results for EN-DE, EN-FR, and EN-PT terminology experiments using the Autodesk Post-Editing Corpus. ”Random’ indicates inserting terminology constraints at random positions in the baseline translation. ”Beginning” indicates prepending constraints to baseline translations. 4.3 Analysis Subjective analysis of decoder output shows that phrases added as constraints are not only placed correctly within the output sequence, but also have global effects upon translation quality. This is a desirable effect for user interaction, since it implies that users can bootstrap quality by adding the most critical constraints (i.e. those that are most essential to the output), first. Table 3 shows several examples from the experiments in Table 1, where the addition of lexical constraints was able to guide our NMT systems away from initially quite low-scoring hypotheses to outputs which perfectly match the reference translations. 5 Related Work Most related work to date has presented modifications of SMT systems for specific usecases which constrain MT output via auxilliary inputs. 
The largest body of work considers Interactive Machine Translation (IMT): an MT system searches for the optimal target-language suffix given a complete source sentence and a desired prefix for the target output (Foster, 2002; Barrachina et al., 2009; Green, 2014). IMT can be viewed as subcase of constrained decoding, where there is only one constraint which is guaranteed to be placed at the beginning of the output sequence. Wuebker et al. (2016) introduce prefix-decoding, which modifies the SMT beam search to first ensure that the target prefix is covered, and only then continues to build hypotheses for the suffix using beams organized by coverage of the remaining phrases in the source segment. Wuebker et al. (2016) and Knowles and Koehn (2016) also present a simple modification of NMT models for IMT, enabling models to predict suffixes for user-supplied prefixes. Recently, some attention has also been given to SMT decoding with multiple lexical constraints. The Pick-Revise (PRIMT) (Cheng et al., 2016) framework for Interactive Post Editing introduces the concept of edit cycles. Translators specify constraints by editing a part of the MT output that is incorrect, and then asking the system for a new hypothesis, which must contain the user-provided correction. This process is repeated, maintaining constraints from previous iterations and adding new ones as needed. Importantly, their approach relies upon the phrase segmentation provided by the SMT system. The decoding algorithm can 1541 EN-DE Source He was also an anti- smoking activist and took part in several campaigns . Original Hypothesis Es war auch ein Anti- Rauch- Aktiv- ist und nahmen an mehreren Kampagnen teil . Reference Constraints Ebenso setzte er sich gegen das Rauchen ein und nahm an mehreren Kampagnen teil . (1) Ebenso setzte er Constrained Hypothesis (2) gegen das Rauchen Ebenso setzte er sich gegen das Rauchen ein und nahm an mehreren Kampagnen teil . (3) nahm EN-FR Source At that point I was no longer afraid of him and I was able to love him . Original Hypothesis Je n’avais plus peur de lui et j’`etais capable de l’aimer . Reference Constraints L´a je n’ai plus eu peur de lui et j’ai pu l’aimer . (1) L´a je n’ai Constrained Hypothesis (2) j’ai pu L´a je n’ai plus eu peur de lui et j’ai pu l’aimer . (3) eu EN-PT Source Mo- dif- y drain- age features by selecting them individually . Original Hypothesis - J´a temos as caracter´ısticas de extracc¸˜ao de idade , com eles individualmente . Reference Constraints Modi- fique os recursos de drenagem ao selec- ion- ´a-los individualmente . (1) drenagem ao selecConstrained Hypothesis (2) Modi- fique os Modi- fique os recursos de drenagem ao selec- ion- ´a-los individualmente . (3) recursos Table 3: Manual analysis of examples from lexically constrained decoding experiments. “-” followed by whitespace indicates the internal segmentation of the translation model (see Section 3.2) only make use of constraints that match phrase boundaries, because constraints are implemented as “rules” enforcing that source phrases must be translated as the aligned target phrases that have been selected as constraints. In contrast, our approach decodes at the token level, and is not dependent upon any explicit structure in the underlying model. Domingo et al. (2016) also consider an interactive scenario where users first choose portions of an MT hypothesis to keep, then query for an updated translation which preserves these portions. 
The MT system decodes the source phrases which are not aligned to the user-selected phrases until the source sentence is fully covered. This approach is similar to the system of Cheng et al., and uses the “XML input” feature in Moses (Koehn et al., 2007). Some recent work considers the inclusion of soft lexical constraints directly into deep models for dialog generation, and special cases, such as recipe generation from a list of ingredients (Wen et al., 2015; Kiddon et al., 2016). Such constraintaware models are complementary to our work, and could be used with GBS decoding without any change to the underlying models. To the best of our knowledge, ours is the first work which considers general lexically constrained decoding for any model which outputs sequences, without relying upon alignments between input and output, and without using a search organized by coverage of the input. 6 Conclusion Lexically constrained decoding is a flexible way to incorporate arbitrary subsequences into the output of any model that generates output sequences token-by-token. A wide spectrum of popular text generation models have this characteristic, and GBS should be straightforward to use with any model that already uses beam search. In translation interfaces where translators can provide corrections to an existing hypothesis, these user inputs can be used as constraints, generating a new output each time a user fixes an error. By simulating this scenario, we have shown that such a workflow can provide a large improvement in translation quality at each iteration. By using a domain-specific terminology to generate target-side constraints, we have shown that a general domain model can be adapted to a new domain without any retraining. Surprisingly, this simple method can lead to significant performance gains, even when the terminology is created automatically. In future work, we hope to evaluate GBS with models outside of MT, such as automatic summarization, image captioning or dialog generation. We also hope to introduce new constraintaware models, for example via secondary attention mechanisms over lexical constraints. 1542 Acknowledgments This project has received funding from Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University funded under the SFI Research Centres Programme (Grant 13/RC/2106) co-funded under the European Regional Development Fund and the European Union Horizon 2020 research and innovation programme under grant agreement 645452 (QT21). We thank the anonymous reviewers, as well as Iacer Calixto, Peyman Passban, and Henry Elder for helpful feedback on early versions of this work. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Sergio Barrachina, Oliver Bender, Francisco Casacuberta, Jorge Civera, Elsa Cubel, Shahram Khadivi, Antonio Lagarda, Hermann Ney, Jes´us Tom´as, Enrique Vidal, and Juan-Miguel Vilar. 2009. Statistical approaches to computer-assisted translation. Computational Linguistics 35(1):3–28. https://doi.org/10.1162/coli.2008.07-055-R2-06-29. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. 
In Proceedings of the Tenth Workshop on Statistical Machine Translation. Association for Computational Linguistics, Lisbon, Portugal, pages 1–46. http://aclweb.org/anthology/W15-3001. Shanbo Cheng, Shujian Huang, Huadong Chen, Xinyu Dai, and Jiajun Chen. 2016. PRIMT: A pickrevise framework for interactive machine translation. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016. pages 1240–1249. http://aclweb.org/anthology/N/N16/N16-1148.pdf. David Chiang. 2007. Hierarchical phrase-based translation. Comput. Linguist. 33(2):201–228. https://doi.org/10.1162/coli.2007.33.2.201. Kyunghyun Cho, Bart van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1724–1734. http://www.aclweb.org/anthology/D141179. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Comput. Linguist. 16(1):22–29. http://dl.acm.org/citation.cfm?id=89086.89095. Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran’s pure neural machine translation systems. CoRR abs/1610.05540. http://arxiv.org/abs/1610.05540. Miguel Domingo, Alvaro Peris, and Francisco Casacuberta. 2016. Interactive-predictive translation based on multiple word-segments. Baltic J. Modern Computing 4(2):282–291. George F. Foster. 2002. Text Prediction for Translators. Ph.D. thesis, Montreal, P.Q., Canada, Canada. AAINQ72434. Spence Green. 2014. Mixed-Initiative Natural Language Translation. Ph.D. thesis, Stanford, CA, United States. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. pages 329–339. http://aclweb.org/anthology/D/D16/D16-1032.pdf. Rebecca Knowles and Philipp Koehn. 2016. Neural interactive translation prediction. AMTA 2016, Vol. page 107. Philipp Koehn. 2009. A process study of computeraided translation. Machine Translation 23(4):241– 263. https://doi.org/10.1007/s10590-010-9076-3. Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press, New York, NY, USA, 1st edition. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Association for Computational Linguistics, 1543 Stroudsburg, PA, USA, ACL ’07, pages 177–180. http://dl.acm.org/citation.cfm?id=1557769.1557821. 
Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Comput. Linguist. 30(4):417–449. https://doi.org/10.1162/0891201042544884. Judea Pearl. 1984. Heuristics: Intelligent Search Strategies for Computer Problem Solving. AddisonWesley Longman Publishing Co., Inc., Boston, MA, USA. Alexander Rush, Yin-Wen Chang, and Michael Collins. 2013. Optimal beam search for machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 210–221. http://www.aclweb.org/anthology/D13-1022. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Llus Mrquez, Chris Callison-Burch, Jian Su, Daniele Pighin, and Yuval Marton, editors, EMNLP. The Association for Computational Linguistics, pages 379–389. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/P/P16/P16-1162.pdf. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI’16, pages 3776–3783. http://dl.acm.org/citation.cfm?id=3016387.3016435. Jason R. Smith, Herve Saint-amand, Chris Callisonburch, Magdalena Plamada, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the common crawl. In In Proceedings of the Conference of the Association for Computational Linguistics (ACL. Lucia Specia. 2011. Exploiting objective annotations for measuring translation post-editing effort. In Proceedings of the European Association for Machine Translation. May. Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Toma Erjavec, and Dan Tufi. 2006. The jrc-acquis: A multilingual aligned parallel corpus with 20+ languages. In In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC’2006. pages 2142–2147. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems. MIT Press, Cambridge, MA, USA, NIPS’14, pages 3104–3112. http://dl.acm.org/citation.cfm?id=2969033.2969173. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2016. Neural machine translation with reconstruction. arXiv preprint arXiv:1611.01874 . Bart van Merrinboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. 2015. Blocks and fuel: Frameworks for deep learning. CoRR abs/1506.00619. Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. 
Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. http://arxiv.org/abs/1609.08144. Joern Wuebker, Spence Green, John DeNero, Sasa Hasan, and Minh-Thang Luong. 2016. Models and inference for prefix-constrained machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 66–75. http://www.aclweb.org/anthology/P16-1007. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In David Blei and Francis Bach, editors, Proceedings of the 32nd International Conference on Machine Learning (ICML-15). JMLR Workshop and Conference Proceedings, pages 2048–2057. http://jmlr.org/proceedings/papers/v37/xuc15.pdf. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701. http://arxiv.org/abs/1212.5701. Ventsislav Zhechev. 2012. Machine Translation Infrastructure and Post-editing Performance at Autodesk. 1544 In AMTA 2012 Workshop on Post-Editing Technology and Practice (WPTP 2012). Association for Machine Translation in the Americas (AMTA), San Diego, USA, pages 87–96. 1545 A NMT System Configurations We train all systems for 500000 iterations, with validation every 5000 steps. The best single model from validation is used in all of the experiments for a language pair. We use ℓ2 regularization on all parameters with α = 1e−5. Dropout is used on the output layers with p(drop) = 0.5. We sort minibatches by source sentence length, and reshuffle training data after each epoch. All systems use a bidirectional GRUs (Cho et al., 2014) to create the source representation and GRUs for the decoder transition. We use AdaDelta (Zeiler, 2012) to update gradients, and clip large gradients to 1.0. Training Configurations EN-DE Embedding Size 300 Recurrent Layers Size 1000 Source Vocab Size 80000 Target Vocab Size 90000 Batch Size 50 EN-FR Embedding Size 300 Recurrent Layers Size 1000 Source Vocab Size 66000 Target Vocab Size 74000 Batch Size 40 EN-PT Embedding Size 200 Recurrent Layers Size 800 Source Vocab Size 60000 Target Vocab Size 74000 Batch Size 40 A.1 English-German Our English-German training corpus consists of 4.4 Million segments from the Europarl (Bojar et al., 2015) and CommonCrawl (Smith et al., 2013) corpora. A.2 English-French Our English-French training corpus consists of 4.9 Million segments from the Europarl and CommonCrawl corpora. A.3 English-Portuguese Our English-Portuguese training corpus consists of 28.5 Million segments from the Europarl, JRCAquis (Steinberger et al., 2006) and OpenSubtitles5 corpora. 5http://www.opensubtitles.org/ 1546
2017
141
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1547–1556, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1142

Combating Human Trafficking with Deep Multimodal Models

Edmund Tong* Language Technologies Institute Carnegie Mellon University [email protected] Amir Zadeh* Language Technologies Institute Carnegie Mellon University [email protected] Cara Jones Marinus Analytics, LLC [email protected] Louis-Philippe Morency Language Technologies Institute Carnegie Mellon University [email protected]
[* Authors contributed equally.]

Abstract
Human trafficking is a global epidemic affecting millions of people across the planet. Sex trafficking, the dominant form of human trafficking, has seen a significant rise mostly due to the abundance of escort websites, where human traffickers can openly advertise among at-will escort advertisements. In this paper, we take a major step in the automatic detection of advertisements suspected to pertain to human trafficking. We present a novel dataset called Trafficking-10k, with more than 10,000 advertisements annotated for this task. The dataset contains two sources of information per advertisement: text and images. For the accurate detection of trafficking advertisements, we designed and trained a deep multimodal model called the Human Trafficking Deep Network (HTDN).

1 Introduction
Human trafficking, “a crime that shames us all” (UNODC, 2008), has seen a steep rise in the United States since 2012. The number of cases reported rose from 3,279 in 2012 to 7,572 in 2016—more than doubling over the course of five years (Hotline). Sex trafficking is a form of human trafficking, and is a global epidemic affecting millions of people each year (McCarthy, 2014). Victims of sex trafficking are subjected to coercion, force, and control, and are not able to ask for help. Put plainly, sex trafficking is modern-day slavery and is one of the top priorities of law enforcement agencies at all levels.

A major advertising ground for human traffickers is the World Wide Web. The Internet has brought traffickers the ability to advertise online and has fostered the growth of numerous adult escort sites. Each day, there are tens of thousands of Internet advertisements posted in the United States and Canada that market commercial sex. Hiding among the noise of at-will adult escort ads are ads posted by sex traffickers. Often long undetected, trafficking rings and escort websites form a profit cycle that fuels the increase of both trafficking rings and escort websites. For law enforcement, this presents a significant challenge: how should we identify advertisements that are associated with sex trafficking? Police have limited human and technical resources, and manually sifting through thousands of ads in the hopes of finding something suspicious is a poor use of those resources, even if they know what they are looking for. Leveraging state-of-the-art machine learning approaches in Natural Language Processing and computer vision to detect and report advertisements suspected of trafficking is the main focus of our work.
In other words, we strive to find the victims and perpetrators of trafficking who hide in plain sight in the massive amounts of data online. By narrowing down the number of advertisements that law enforcement must sift through, we endeavor to provide a real opportunity for law enforcement to intervene in the lives of victims. However, there are non-trivial challenges facing this line of research: Adversarial Environment. Human trafficking rings are aware that law enforcement monitors their online activity. Over the years, law enforcement officers have populated lists of keywords that frequently occur in trafficking advertisements. However, these simplistic queries fail when traffickers use complex obfuscation. Traffickers, again aware of this, move to new keywords to blend in with the at-will escort advertisements. This trend creates an adversarial environment for any machine learning 1547 system that attempts to find trafficking rings hiding in plain sight. Defective Language Compositionality. Online escort advertisements are difficult to analyze, because they lack grammatical structures such as constituency. Therefore, any form of inference must rely more on context than on grammar. This presents a significant challenge to the NLP community. Furthermore, the majority of the ads contain emojis and non-English characters. Generalizable Language Context. Machine learning techniques can easily learn unreliable cues in training sets such as phone numbers, keywords, and other forms of semantically unreliable discriminators to reduce the training loss. Due to limited similarity between the training and test data due to the large number of ads available online, relying on these cues is futile. Learned discriminative features should be generalizable and model semantics of trafficking. Multimodal Nature. Escort advertisements are composed of both textual and visual information. Our model should treat these features interdependently. For instance, if the text indicates that the escort is in a hotel room, our model should consider the effect that such knowledge may have on the importance of certain visual features. We believe that studying human trafficking advertisements can be seen as a fundamental challenge to the NLP, computer vision, and machine learning communities dealing with language and vision problems. In this paper, we present the following contributions to this research direction. First, we study the language and vision modalities of the escort advertisements through deep neural modeling. Second, we take a significant step in automatic detection of advertisements suspected of sex trafficking. While previous methods (Dubrawski et al., 2015) have used simplistic classifiers, we build an end-to-end-trained multimodal deep model called the Human Trafficking Deep Network (HTDN). The HTDN uses information from both text and images to extract cues of human trafficking, and shows outstanding performance compared to previously used models. Third, we present the first rigorously annotated dataset for detection of human trafficking, called Trafficking-10k, which includes more than 10,000 trafficking ads labeled with likelihoods of having been posted by traffickers.1 1Due to the sensitive nature of this dataset, access can only be granted by emailing Cara Jones. Different levels of access 2 Related Works Automatic detection of human trafficking has been a relatively unexplored area of machine learning research. 
Very few machine learning approaches have been proposed to detect signs of human trafficking online. Most of these approaches use simplistic methods such as multimedia matching (Zhou et al., 2016), text-based filtering classifiers such as random forests, logistic regression, and SVMs (Dubrawski et al., 2015), and named-entity recognition to isolate the instances of trafficking (Nagpal et al., 2015). Studies have suggested using statistical methods to find keywords and signs of trafficking from data to help law enforcement agencies (Kennedy, 2012) as well as adult content filtering using textual information (Zhou et al., 2016). Multimodal approaches have gained popularity over the past few years. These multimodal models have been used for medical purposes, such as detection of suicidal risk, PTSD and depression (Scherer et al., 2016; Venek et al., 2016; Yu et al., 2013; Valstar et al., 2016); sentiment analysis (Zadeh et al., 2016b; Poria et al., 2016; Zadeh et al., 2016a); emotion recognition (Poria et al., 2017); image captioning and media description (You et al., 2016; Donahue et al., 2015); question answering (Antol et al., 2015); and multimodal translation (Specia et al., 2016). To the best of our knowledge, this paper presents the first multimodal and deep model for detection of human trafficking. 3 Trafficking-10k Dataset In this section, we present the dataset for our studies. We formalize the problem of recognizing sex trafficking as a machine learning task. The input data is text and images; this is mapped to a measure of how suspicious the advertisement is with regards to human trafficking. 3.1 Data Acquisition and Preprocessing A subset of 10,000 ads were sampled randomly from a large cache of escort ads for annotation in Trafficking-10k dataset. The distribution of advertisements across the United States and Canada is shown in Figure 1, which indicates the diversity of advertisements in Trafficking-10k. This diversity ensures that models trained on Trafficking-10k can be applicable nationwide. The 10,000 collected ads are provided only to scientific community. 1548 Figure 1: Distribution of advertisements in Trafficking-10k dataset across United States and Canada. each consist of text and zero or more images. The text in the dataset is in plain text format, derived by stripping the HTML tags from the raw source of the ads. The set of characters in each advertisement is encoded as UTF-8, because there is ample usage of smilies and non-English characters. Advertisements are truncated to the first 184 words, as this covers more than 90% of the ads. Images are resized to 224 × 224 pixels with RGB channels. 3.2 Trafficking Annotation Detecting whether or not an advertisement is suspicious requires years of practice and experience in working closely with law enforcement. As a result, annotation is a highly complicated and expensive process, which cannot be scaled using crowdsourcing. In our dataset, annotation is carried out by two expert annotators, each with at least five years of experience, in detection of human trafficking and another annotator with one year of experience. In our dataset, annotations were done by three experts. One expert has over a year of experience, and the other two have over five years of experience in the human trafficking domain. To calculate the interannotator agreement, each annotator is given the same set of 1000 ads to annotate and the nominal agreement is found: there was a 83% pairwise agreement (0.62 Krippendorff’s alpha). 
Also, to make sure that annotations are generalizable across the annotators and law enforcement officers, two law enforcement officers annotated, respectively, a subset of 500 and 100 of the advertisements. We found a 62% average pairwise agreement (0.42 Krippendorff’s alpha) with our annotators. This gap is reasonable, as law enforcement officers only have experience with local advertisements, while Trafficking-10k annotators have experience with cases across the United States. Annotators used an annotation interface specifically designed for the Trafficking-10k dataset. In the annotation interface, each advertisement was displayed on a separate webpage. The order of the advertisements is determined uniformly randomly, and annotators were unable to move to the next advertisement without labeling the current one. For each advertisement, the annotator was presented with the question: “In your opinion, would you consider this advertisement suspicious of human trafficking?” The annotator is presented with the following options: “Certainly no,” “Likely no,” “Weakly no,” “Unsure,”2 “Weakly yes,” “Likely yes,” and “Certainly yes.” Thus, the degree to which advertisements are suspicious is quantized into seven levels. 3.3 Analysis of Language The language used in these advertisements introduces fundamental challenges to the field of NLP. The nature of the textual content in these advertisements raises the question of how we can make inferences in a linguistic environment with a constantly evolving lexicon. Language used in the Trafficking-10k dataset is highly inconsistent with standard grammar. Often, words are obfuscated by emojis and symbols. The word ordering is inconsistent, and there is rarely any form of constituency. This form of language is completely different from spoken and written English. These attributes make escort advertisements appear somewhat similar to tweets, specifically since these ads are normally short (more than 90% of the ads have at most 184 words). Another point of complexity in these advertisements is the high number of unigrams, due to usage of uncommon words and obfuscation. On top of unigram complexity, advertisers continuously change their writing pattern, making this problem more complex. 3.4 Dataset Statistics There are 106,954 distinct unigrams, 353,324 distinct bigrams, and 565,403 trigrams in the Trafficking-10k dataset. There are 60,337 images. The total number of distinct characters including whitespace, punctuations, and hex characters is 182. The average length of an ad is 137 words, with a 2This option is greyed out for 10 seconds to encourage annotators to make an intuitive decision. 1549 0 40 80 120 160 200 0 500 1,000 1,500 + Number of unigrams Advertisement lengths Positive Negative Figure 2: Distribution of the length of advertisements in Trafficking-10k. There is no significant difference between positive and negative cases purely based on length. standard deviation of 74, median 133. The shortest advertisement has 7 unigrams, and the longest advertisement has 1810 unigrams. There are of 106,954 distinct unigrams, 353,324 distinct bigrams and 565,403 trigrams in the Trafficking-10k dataset. The average number of images in an advertisement is 5.9; the median is 5, the minimum is 0, and the maximum is 90. The length of suspected advertisements is 134 unigrams; the standard deviation is 39, the minimum is 12, and the maximum is 666. The length of non-suspected ads is 141; the standard deviation is 85, the minimum is 7, and the maximum is 1810. 
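Statistics of this kind (distinct n-gram counts and length mean, median, standard deviation, minimum and maximum) can be tallied with a few lines of code. The sketch below operates on a hypothetical list of ad strings and uses whitespace tokenisation, which only approximates the tokenisation behind the reported figures.

from statistics import mean, median, pstdev

def ngrams(tokens, n):
    return zip(*(tokens[i:] for i in range(n)))

def corpus_statistics(ads):
    """ads: hypothetical list of advertisement texts (plain strings)."""
    token_lists = [ad.split() for ad in ads]
    lengths = [len(t) for t in token_lists]
    distinct = {n: len(set(g for t in token_lists for g in ngrams(t, n)))
                for n in (1, 2, 3)}
    return {
        "distinct unigrams": distinct[1],
        "distinct bigrams": distinct[2],
        "distinct trigrams": distinct[3],
        "mean length": mean(lengths),
        "median length": median(lengths),
        "std length": pstdev(lengths),
        "min length": min(lengths),
        "max length": max(lengths),
    }

print(corpus_statistics(["hello gr8 time in town tonight", "call now for a gr8 time"]))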
The total number of suspected ads is 3257; and the total number of non-suspected ads is 6992. Figure 2 shows the histogram of number of ads based on their length. Both the positive and negative distributions are similar. This means that there is no obvious length difference between the two classes. Most of the ads have a length of 80–180 words. 4 Model In this section, we present our deep multimodal network called the Human Trafficking Deep Network (HTDN). The HTDN is a multimodal network with language and vision components. The input to the HTDN is an ad, text and images. The HTDN is shown in Figure 3. In the remainder of this section, we will outline the different parts of the HTDN, and the input features to each component. 4.1 Trafficking Word Embeddings Our approach to deal with the adversarial environment of escort ads is to use word vectors, defining words not based on their constituent characters, but rather based on their context. For instance, consider the two unigrams “cash” and “©a$h.” While these contain different characters, semantically they are the same, and they occur in the same context. Thus, our expectation is that both the unigrams will be mapped to similar vectors. Word embeddings pretrained on general domains do not cover most of the unigrams in Trafficking-10k. For instance, the GloVe embedding (Pennington et al., 2014) trained on Wikipedia covers only 49.7% of our unigrams. The first step of the HTDN pipeline is to train word vectors (Mikolov et al., 2013) based on the skip-gram model. This is especially suitable for escort ads, because skip-gram models are able to capture context without relying on word order. We train the word embedding using 1,000,000 unlabeled ads from a dataset that does not include the Trafficking-10k data. For each advertisement, the input to the trained embedding is a sequence of words ˆw = [ ˆw1, . . . , ˆwt], and the output is a sequence of 100-dimensional word vectors w = [w1, . . . , wt], where t is the size of the advertisement and wi ∈R100. Our trained word vectors cover 94.9% of the unigrams in the Trafficking-10k dataset. 4.2 Language Network Our language network is designed to deal with two challenging aspects of escort advertisements: (1) violation of constituency, and (2) presence of irrelevant information not related to trafficking but present in ads. We address both of these issues by learning a time dependent embedding at word level. This allows the model to not rely on constituency and also remember useful information from the past, should the model get overwhelmed by irrelevant information. Our proposed language network, Fl, takes as input a sequence of word vectors w = [w1, . . . , wt], and outputs a neural language representation hl. As a first step, Fl uses the word embeddings as input to a Long-Short Term Memory (LSTM) network and produces a new supervised context-aware word embedding u = [u1, . . . , ut] where ui ∈R300 is the output of the LSTM at time i. Then, u is fed into a fully connected layer with dropout p = 0.5 to produce the neural language representation hl ∈R300 according to the following formulas with weights Wl for the LSTM and implicit weights in the fully 1550 Language Network Fl d0ll@r to ... g r 8 skype · · · LSTM · · · LSTM · · · LSTM · · · LSTM hl ∈R300 Trafficking embedding ... ... ... 300σ Visual Network Fv ˆı1 ˆı2 ˆı3 ˆı4 ˆı5 Trafficking VGG ... ... ... 
200σ 200σ 200σ hv ∈R5×200 Convolutional Decision Network Fd conv hm ∈R5×200×300 ⊗ 5 × 200 × 300 max pooling conv 5 × 100 × 150 max pooling 150 linear ... P[τ | hm; Wd] σ Figure 3: Overview of our proposed Human Trafficking Deep Network (HTDN). The input to HTDN is text and a set of 5 images. The text goes through the Language Network Fl to get the language representation hl and the set of 5 images go through the Vision Network Fv to get the visual representation hv. hl and hv are then fused together to get the multimodal representation hm. The Convolutional Decision Network Fd conditioned on the hm makes inference about whether or not the advertisement is suspected of trafficking connected layers, which we represent by FC: ui = LSTM (i, wi; Wl) (1) u = [u1, . . . , ut] (2) hl = FC(u). (3) The generated hl is then used as part of the HTDN pipeline, and is also trained independently to assess the performance of the language-only model. The language network Fl is the combination of the LSTM and the fully-connected network. 4.3 Vision Network Parallel to the language network, the vision network Fv takes as input advertisement images and extracts visual representations hv. The vision network takes at most five images; the median number of images per advertisement in Trafficking-10k is 5. To learn contextual and abstract information from images, we use a deep convolutional neural network called Trafficking-VGG (T-VGG), a finetuned instance of the well-known VGG network (Simonyan and Zisserman, 2014). T-VGG is a deep model with 13 consecutive convolutional layers followed by 2 fully connected layers; it does not include the softmax layer of VGG. The procedure for fine-tuning T-VGG maps each individual image to a label that comes from the advertisement, and then performs end-to-end training. For example, if there are five images in an advertisement with positive label, all five images are mapped to positive label. After fine-tuning, three fully connected layers of 200 neurons with dropout p = 0.5 are added to the network. The combination of T-VGG and the fully connected layers is the vision network Fl. We consider five images ˆı = {ˆı1, . . . ,ˆı5} from each input advertisement. If the advertisement has fewer than five images, zero-filled images are added. For each image, the output of Fv is a representation of five images i = {i1, . . . , i5}. The visual representation hv ∈R5×200 is a matrix with a size-200 representation of each of the 5 images: hv = Fv(ˆı; Wv). (4) 4.4 Multimodal Fusion Escort advertisements have complex dynamics between text and images. Often, neither linguistic nor visual cues alone can suffice to classify whether an ad is suspicious. Interactions between linguistic and visual cues can be non-trivial, so this requires an explicit joint representation for each neuron in the linguistic and visual representations. In our multimodal fusion approach we address this by calculating an outer product between language and visual representations hl and hv to build the full space of possible outcomes: hm = hl ⊗hv, (5) 1551 Figure 4: 2D t-SNE representation of different input features for baseline models. Clockwise from top left: one hot vectors with expert data, one hot vectors without expert data, visual features from Vision Network Fv, and average word vectors. These representations show that inference is not trivial in Trafficking-10k dataset. where ⊗is an outer product of the two representations. This creates a joint multimodal tensor called hm for language and visual modalities. 
In this tensor, every neuron in the language representation is multiplied by every neuron in vision representation, thus creating a new representation containing the information of both of them. Thus, the final fusion tensor hm ∈R5×200×300 contains information from the joint interaction of the language and visual modalities. 4.5 Convolutional Decision Network The multimodal representation hm is used as the input to the convolutional decision network Fd. Fd has two layers of convolution and max pooling with a dropout rate of p = 0.5, followed by a fully connected layer of 150 neurons with a dropout rate of p = 0.5. Performing convolutions in this space enables the model to attend to small areas of linguistic and visual cues. It can thus find correspondences between specific combinations of the linguistic and visual representations. The final decision is made by a single sigmoid neuron. 5 Experiments In our experiments, we compare the HTDN with previously used approaches for detection of trafficking suspicious ads. Furthermore, we compare the HTDN to the performance of its unimodal components. In all our experiments we perform binary classification of whether the advertisement is suspected of being related to trafficking. The main comparison method that we use is the weighted accuracy and F1-score (due to imbalance it dataset). The formulation for weighted accuracy is as follows: Wt. Acc. = TP × N/P + TN 2N (6) 1552 Model Wt. Acc. (%) F1 (%) Acc. (%) Precision (%) Recall (%) Random 50.0 68.2 Keywords Random Forest 67.0 55.2 78.1 78.2 42.6 Logistic Regression 69.9 57.8 78.4 75.5 46.8 Linear SVM 69.5 57.0 78.6 78.0 44.9 Average Trafficking Vectors Random Forest 67.3 54.1 78.0 79.3 41.1 Logistic Regression 72.2 61.7 80.2 79.2 50.6 Linear SVM 70.3 57.7 79.2 80.7 44.9 108 One-Hot Random Forest 62.4 60.7 72.6 61.5 60.0 Logistic Regression 62.5 45.1 72.2 60.0 36.1 Linear SVM 61.7 45.1 71.8 58.6 36.7 Bag of Words Random Forest 57.6 24.5 70.4 63.2 15.2 Logistic Regression 71.1 24.5 70.4 63.2 15.2 Linear SVM 71.2 24.5 70.4 63.2 15.2 HTDN Unimodal Fl 74.5 65.8 78.8 69.8 62.3 Fv [VGG] 69.1 58.4 74.2 66.7 52.0 Fv [T-VGG] 70.4 59.5 77.3 78.3 48.0 HTDN 75.3 66.5 80.0 71.4 62.2 Human 83.7 73.7 84.0 76.7 70.9 Table 1: Results of our experiments. We compare our HTDN model to various baselines using different inputs. HTDN ourperforms other baselines in both weighted accuracy and F-score. where TP (resp. TN) is true positive (resp. true negative) predictions, and P (resp. N) is the total number of positive (resp. negative) examples. 5.1 Baselines We compare the performance of the HTDN network with baseline models divided in 4 major categories Bag-of-Words Baselines. This set of baselines is designed to assess performance of off-the-shelf basic classifiers and basic language features. We train random forest, logistic regression and linear SVMs to show the performance of simple languageonly models. Keyword Baselines. These demonstrate the performance of models that use a set of 108 keywords, all highly related to trafficking, provided by law enforcement officers.3 A binary one-hot vector representing these keywords is used to train the 3Not presented in this paper due to sensitive nature of these keywords. random forest, logistic regression, and linear SVM models. 108 One-Hot Baselines. Similar to Keywords Baseline, we use feature selection technique to filter the most informative 108 words for detection of trafficking. 
We compare the performance of this baseline to Keywords baseline to evaluate the usefulness of expert knowledge in keywords selection vs automatic data-driven keyword selection. Average Trafficking Vectors Baselines. We assess the magnitude of success for the trafficking word embeddings for different classifiers. For the random forest, logistic regression, and linear SVM models, the average word vector is calculated and used as input. HTDN Unimodal. These baselines show the performance of unimodal components of HTDN. For language we only use Fl component of the pipeline and for visual we use Fv, using both pretrained a VGG and finetuned T-VGG. 1553 Random and Human. Random is based on assigning the more frequent class in training set to all the test data, and can be considered a lower bound for our model. Human performance metrics are upper bounds for this task’s metrics. We visualize the different inputs to our baseline models to show the complexity of the dataset when using different feature sets. Figure 4 shows the 2D t-SNE (Maaten and Hinton, 2008) representation of the training data in our dataset according to the Bag-of-Words (top right) models, expert keywords (top left), average word vectors (bottom right), and the visual representation hv bottom left. The distribution of points suggests that none of the feature representations make the classification task trivial. 5.2 Training Parameters All the models in our experiments are trained on the Trafficking-10k designated training set and tested on the designated test set. Hyperparameter evaluation is performed using a subset of training set as validation set. The HTDN model is trained using the Adam optimizer (Kingma and Ba, 2014). The neural weights were initialized randomly using Xavier initialization technique (Glorot and Bengio, 2010). The random forest model uses 10 estimators, with no maximum depth, and minimum-samplesper-split value of 2. The linear SVM model uses an ℓ2-penalty and a square hinge loss with C = 1. 6 Results and Discussion The results of our experiments are shown in Table 1. We report the results on three metrics: F1-score, weighted accuracy, and accuracy. Due to the imbalance between the numbers of positive and negative samples, weighted accuracy is more informative than unweighted accuracy, so we focus on the former. HTDN. The first observation from Table 1 is that the HTDN model outperforms all the proposed baselines. There is a significant gap between the HTDN (and variants) and other non-neural approaches. This better performance is an indicator of complex interactions in detecting dynamics of human trafficking, which is captured by the HTDN. Both Modalities are Helpful. Both modalities are helpful in predicting signs of trafficking (Fl and Fv [T-VGG]). Fine-tuning VGG network parameters shows improvement over pre-trained VGG parameters. Language is More Important. Since Fl shows better performance than Fv [T-VGG], the language modality appears to be the more informative modality for detecting trafficking suspicious ads. 7 Conclusion and Future Work In this paper, we took a major step in multimodal modeling of suspected online trafficking advertisements. We presented a novel dataset, Trafficking10k, with more than 10,000 advertisements annotated for this task. The dataset contains two modalities of information per advertisement: text and images. We designed a deep multimodal model called the Human Trafficking Deep Network (HTDN). 
We compared the performance of the HTDN to various models that use language and vision alone. The HTDN outperformed all of these, indicating that using information from both sources may be more helpful than using just one. Exploring language through character modeling. In order to eliminate the need for retraining the word vectors as the language of the domain evolves, we plan to use character models to learn a better language model for trafficking. As new obfuscated words are introduced in escort advertisements, our hope is that character models will stay invariant to these obfuscations. Understanding images. While CNNs have proven to be useful for many different computer vision tasks, we seek to improve the learning capability of the visual network. Future direction involves using graphical modeling to understand interactions in the scene. Another direction involves working to understand text in images, which can provide more information about the subjects of the images. Given that the current state of the art in this area generally does not use deep models, this may be a major opportunity for improvement. To this end, we encourage the research community to reach out to Cara Jones, an author of this paper, to obtain a copy of Trafficking-10k and other training data. Acknowledgements We would like to thank William Chargin for creating figures and revising this paper. We would also like to thank Torsten W¨ortwein for his assistance in visualizing our data. Furthermore, we would like to thank our anonymous reviewers for their valuable feedback. Finally, we would like to acknowledge collaborators from Marinus Analytics for the time and effort that they put into annotating advertise1554 ments for the dataset, and for allowing us to use their advertisement data. References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision. pages 2425–2433. Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 2625–2634. Artur Dubrawski, Kyle Miller, Matthew Barnes, Benedikt Boecking, and Emily Kennedy. 2015. Leveraging publicly available data to discern patterns of human-trafficking activity. Journal of Human Trafficking 1(1):65–85. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Aistats. volume 9, pages 249–256. National Human Trafficking Hotline. ???? Hotline statistics. Emily Kennedy. 2012. Predictive patterns of sex trafficking online. Dietrich College Honors Theses . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research 9(Nov):2579–2605. Lauren A McCarthy. 2014. Human trafficking and the new slavery. Annual Review of Law and Social Science 10:221–242. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Chirag Nagpal, Kyle Miller, Benedikt Boecking, and Artur Dubrawski. 
2015. An entity resolution approach to isolate instances of human trafficking online. arXiv preprint arXiv:1509.06659 . Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In EMNLP. volume 14, pages 1532– 1543. Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. 2017. A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion 1:34. Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Amir Hussain. 2016. Convolutional mkl based multimodal emotion recognition and sentiment analysis. In 2016 IEEE 16th International Conference on Data Mining (ICDM). IEEE, pages 439–448. Stefan Scherer, Gale M Lucas, Jonathan Gratch, Albert Skip Rizzo, and Louis-Philippe Morency. 2016. Self-reported symptoms of depression and ptsd are associated with reduced vowel space in screening interviews. IEEE Transactions on Affective Computing 7(1):59–73. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 . Lucia Specia, Stella Frank, Khalil Sima’an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation, Berlin, Germany. Association for Computational Linguistics. UNODC. 2008. Human trafficking: An overview. Web, New York. http://www.ungift.org/doc/knowledgehub/resourcecentre/GIFT˙Human˙Trafficking˙An˙Overview˙2008.pdf. Michel Valstar, Jonathan Gratch, Bj¨orn Schuller, Fabien Ringeval, Dennis Lalanne, Mercedes Torres Torres, Stefan Scherer, Giota Stratou, Roddy Cowie, and Maja Pantic. 2016. Avec 2016: Depression, mood, and emotion recognition workshop and challenge. In Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge. ACM, pages 3–10. Verena Venek, Stefan Scherer, Louis-Philippe Morency, Albert Rizzo, and John Pestian. 2016. Adolescent suicidal risk assessment in clinician-patient interaction. IEEE Transactions on Affective Computing . Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with semantic attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 4651–4659. Zhou Yu, Stefen Scherer, David Devault, Jonathan Gratch, Giota Stratou, Louis-Philippe Morency, and Justine Cassell. 2013. Multimodal prediction of psychological disorders: Learning verbal and nonverbal commonalities in adjacency pairs. In Semdial 2013 DialDam: Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue. pages 160– 169. 1555 Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016a. Mosi: Multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259 . Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016b. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems 31(6):82–88. Andrew Jie Zhou, Jiyun Luo, and Lewis John McGibbney. 2016. Multimedia metadata-based forensics in human trafficking web data. Vanessa Murdock, Charles LA Clarke, Jaap page 10. 1556
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1557–1567 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1143 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1557–1567 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1143 MalwareTextDB: A Database for Annotated Malware Articles Swee Kiat Lim and Aldrian Obaja Muis and Wei Lu Singapore University of Technology and Design 8 Somapah Road, Singapore, 487372 [email protected], {aldrian_muis,luwei}@sutd.edu.sg Chen Hui Ong DSO National Laboratories 20 Science Park Drive, Singapore, 118230 [email protected] Abstract Cybersecurity risks and malware threats are becoming increasingly dangerous and common. Despite the severity of the problem, there has been few NLP efforts focused on tackling cybersecurity. In this paper, we discuss the construction of a new database for annotated malware texts. An annotation framework is introduced based around the MAEC vocabulary for defining malware characteristics, along with a database consisting of 39 annotated APT reports with a total of 6,819 sentences. We also use the database to construct models that can potentially help cybersecurity researchers in their data collection and analytics efforts. 1 Introduction In 2010, the malware known as Stuxnet physically damaged centrifuges in Iranian nuclear facilities (Langner, 2011). More recently in 2016, a botnet known as Mirai used infected Internet of Things (IoT) devices to conduct large-scale Distributed Denial of Service (DDoS) attacks and disabled Internet access for millions of users in the US West Coast (US-CERT, 2016). These are only two cases in a long list ranging from ransomeware on personal laptops (Andronio et al., 2015) to taking over control of moving cars (Checkoway et al., 2011). Attacks such as these are likely to become increasingly frequent and dangerous as more devices and facilities become connected and digitized. Recently, cybersecurity defense has also been recognized as one of the “problem areas likely to be important both for advancing AI and for Figure 1: Annotated sentence and sentence fragment from MalwareTextDB. Such annotations provide semantic-level information to the text. its long-run impact on society" (Sutskever et al., 2016). In particular, we feel that natural language processing (NLP) has the potential for substantial contribution in cybersecurity and that this is a critical research area given the urgency and risks involved. There exists a large repository of malwarerelated texts online, such as detailed malware reports by various cybersecurity agencies such as Symantec (DiMaggio, 2015) and Cylance (Gross, 2016) and in various blog posts. Cybersecurity researchers often consume such texts in the process of data collection. However, the sheer volume and diversity of these texts make it difficult for researchers to quickly obtain useful information. A potential application of NLP can be to quickly highlight critical information from these texts, such as the specific actions taken by a certain malware. This can help researchers quickly understand the capabilities of a specific malware and search in other texts for malware with similar capabilities. An immediate problem preventing application of NLP techniques to malware texts is that such 1557 texts are mostly unannotated. 
This severely limits their use in supervised learning techniques. In light of that, we introduce a database of annotated malware reports for facilitating future NLP work in cybersecurity. To the best of our knowledge, this is the first database consisting of annotated malware reports. It is intended for public release, where we hope to inspire contributions from other research groups and individuals. The main contributions of this paper are: • We initiate a framework for annotating malware reports and annotate 39 Advanced Persistent Threat (APT) reports (containing 6,819 sentences) with attribute labels from the Malware Attribute Enumeration and Characterization (MAEC) vocabulary (Kirillov et al., 2010). • We propose the following tasks, construct models for tackling them, and discuss the challenges: • Classify if a sentence is useful for inferring malware actions and capabilities, • Predict token, relation and attribute labels for a given malware-related text, as defined by the earlier framework, and • Predict a malware’s signatures based only on text describing the malware. 2 Background 2.1 APTnotes The 39 APT reports in this database are sourced from APTnotes, a GitHub repository of publiclyreleased reports related to APT groups (Blanda, 2016). The repository is constantly updated, which means it is a constant source of reports for annotations. While the repository consists of 384 reports (as of writing), we have chosen 39 reports from the year 2014 to initialize the database. 2.2 MAEC The MAEC vocabulary was devised by The MITRE Corporation as a standardized language for describing malware (Kirillov et al., 2010). The MAEC vocabulary is used as a source of labels for our annotations. This will facilitate crossapplications in other projects and ensure relevance in the cybersecurity community. 2.3 Related Work There are datasets available, which are used for more general tasks such as content extraction (Walker et al., 2006) or keyword extraction (Kim et al., 2010). These may appear similar to our dataset. However, a big difference is that we are not performing general-purpose annotation and not all tokens are annotated. Instead, we only annotated tokens relevant to malware capabilities and provide more valuable information by annotating the type of malware capability or action implied. These are important differentiating factors, specifically catered to the cybersecurity domain. While we are not aware of any database catering specifically to malware reports, there are various databases in the cybersecurity domain that provide malware hashes, such as the National Software Reference Library (NSRL) (NIST, 2017; Mead, 2006) and the File Hash Repository (FHR) by the Open Web Application Security Project (OWASP, 2015). Most work on classifying and detecting malware has also been focusing on detecting system calls (Alazab et al., 2010; Briones and Gomez, 2008; Willems et al., 2007; Qiao et al., 2013). More recently, Rieck et al. (2011) has incorporated machine learning techniques for detecting malware, again through system calls. To the best of our knowledge, we are not aware of any work on classifying malware based on analysis of malware reports. By building a model that learns to highlight critical information on malware capabilities, we feel that malware-related texts can become a more accessible source of information and provide a richer form of malware characterization beyond detecting file hashes and system calls. 
3 Data Collection We worked together with cybersecurity researchers while choosing the preliminary dataset, to ensure that it is relevant for the cybersecurity community. The factors considered when selecting the dataset include the mention of most current malware threats, the range of author sources, with blog posts and technical security reports, and the range of actor attributions, from several suspected state actors to smaller APT groups. 3.1 Preprocessing After the APT reports have been downloaded in PDF format, the PDFMiner tool (Shinyama, 2004) is used to convert the PDF files into plaintext format. The reports often contain non-sentences, such as the cover page or document header and 1558 footer. We went through these non-sentences manually and subsequently removed them before the annotation. Hence only complete sentences are considered for subsequent steps. 3.2 Annotation The Brat Rapid Annotation Tool (Stenetorp et al., 2012) is used to annotate the reports. The main aim of the annotation is to map important word phrases that describe malware actions and behaviors to the relevant MAEC vocabulary, such as the ones shown in Figure 1. We first extract and enumerate the labels from the MAEC vocabulary, which we call attribute labels. This gives us a total of 444 attribute labels, consisting of 211 ActionName labels, 20 Capability labels, 65 StrategicObjectives labels and 148 TacticalObjectives labels. These labels are elaborated in Section 3.5. There are three main stages to the annotation process. These are cumulative and eventually build up to the annotation of the attribute labels. 3.3 Stage 1 - Token Labels The first stage involves annotating the text with the following token labels, illustrated in Figure 2: Action This refers to an event, such as “registers”, “provides” and “is written”. Subject This refers to the initiator of the Action such as “The dropper” and “This module”. Object This refers to the recipient of the Action such as “itself”, “remote persistent access” and “The ransom note”; it also refers to word phrases that provide elaboration on the Action such as “a service”, “the attacker” and “disk”. Modifier This refers to tokens that link to other word phrases that provide elaboration on the Action such as “as” and “to”. This stage helps to identify word phrases that are relevant to the MAEC vocabulary. Notice that for the last sentence in Figure 2, “The ransom note” is tagged as an Object instead of a Subject. This is because the Action “is written” is not being initiated by “The ransom note”. Instead, the Subject is absent in this sentence. 3.4 Stage 2 - Relation Labels The second stage involves annotating the text with the following relation labels: Figure 2: Examples of annotated sentences. Figure 3: Examples of irrelevant sentences. SubjAction This links an Action with its relevant Subject. ActionObj This links an Action with its relevant Object. ActionMod This links an Action with its relevant Modifier. ModObj This links a Modifier with the Object that provides elaboration. This stage helps to make the links between the labelled tokens explicit, which is important in cases where a single Action has multiple Subjects, Objects or Modifiers. Figure 2 demonstrates how the relation labels are used to link the token labels. 3.5 Stage 3 - Attribute Labels The third stage involves annotating the text with the attribute labels extracted from the MAEC vocabulary. 
Since the Action is the main indicator of a malware’s action or capability, the attribute labels are annotated onto the Actions tagged in Stage 1. Each Action should have one or more attribute labels. There are four classes of attribute labels: ActionName, Capability, StrategicObjectives and TacticalObjectives. These labels describe different actions and capabilities of the malware. Refer to Appendix A for examples and elaboration. 3.6 Summary The above stages complete the annotation process and is done for each document. There are also sentences that are not annotated at all since they do not provide any indication of malware actions or capabilities, such as the sentences in Figure 3. We call these sentences irrelevant sentences. At the time of writing, the database consists of 39 annotated APT reports with a combined total of 6,819 sentences. Out of the 6,819 sentences, 1559 Figure 4: Two different ways for annotating a sentence, where both seem to be equally satisfactory to a human annotator. In this case, both serve to highlight the malware’s ability to hide its DLL’s functionality. Token Labels Relation Labels Attribute Labels (by label) (by label) (by class) Subj 1,778 SubjAction 2,343 ActionName 982 Obj 4,411 ActionObj 2,713 Capability 2,524 Act 2,975 ActionMod 1,841 StratObj 2,004 Mod 1,819 ModObj 1,808 TactObj 1,592 Total 10,983 Total 8,705 Total 7,102 Table 1: Breakdown of annotation statistics. 2,080 sentences are annotated. Table 1 shows the breakdown of the annotation statistics. 3.7 Annotators’ Challenges We can calculate the Cohen’s Kappa (Cohen, 1960) to quantify the agreement between annotators and to give an estimation of the difficulty of this task for human annotators. Using annotations from pairs of annotators, the Cohen’s Kappa was calculated to be 0.36 for annotation of the Token labels. This relatively low agreement between annotators suggests that this is a rather difficult task. In the following subsections, we discuss some possible reasons that make this annotation task difficult. 3.7.1 Complex Sentence Structures In many cases, there may be no definite way to label the tokens. Figure 4 shows two ways to annotate the same sentence. Both annotations essentially serve to highlight the Gen 2 sub-family’s capability of hiding the DLL’s functionality. The first annotation highlights the method used by the malware to hide the library, i.e., employing the Driver. The second annotation focuses on the malware hiding the library and does not include the method. Also notice that the Modifiers highlighted are different in the two cases, since this depends on the Action highlighted and are hence mutually exclusive. Such cases occur more commonly when the sentences contain complex noun- and verbphrases that can be decomposed in several ways. Repercussions surface later in the experiments described in Section 5.2, specifically in the second point under Discussion. 3.7.2 Large Quantity of Labels Due to the large number (444) of attribute labels, it is challenging for annotators to remember all of the attribute labels. Moreover, some of the attribute labels are subject to interpretation. For instance, should Capability: 005: MalwareCapability-command_and_control be tagged for sentences that mention the location or IP addresses of command and control servers, even though such sentences may not be relevant to the capabilities of the malware? 
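As an aside on tooling, counts such as those in Table 1 can be tallied directly from the annotation files exported by brat. The sketch below assumes brat's standoff layout, in which each line begins with an identifier ('T' for text-bound token labels, 'R' for relations, 'A' for attributes) followed by a tab and the label type; the directory name and these format details are assumptions of this illustration, not a specification of the released data.

from collections import Counter
from pathlib import Path

def tally_annotations(ann_dir):
    """Count token, relation and attribute labels across brat .ann files in ann_dir."""
    token_counts, relation_counts, attribute_counts = Counter(), Counter(), Counter()
    for ann_file in Path(ann_dir).glob("*.ann"):
        for line in ann_file.read_text(encoding="utf-8").splitlines():
            if not line.strip():
                continue
            ann_id, rest = line.split("\t", 1)
            label = rest.split()[0]              # first field after the tab is the type
            if ann_id.startswith("T"):
                token_counts[label] += 1         # Subject / Object / Action / Modifier
            elif ann_id.startswith("R"):
                relation_counts[label] += 1      # SubjAction / ActionObj / ActionMod / ModObj
            elif ann_id.startswith("A"):
                attribute_counts[label] += 1     # MAEC attribute labels
    return token_counts, relation_counts, attribute_counts

# Hypothetical usage: tok, rel, attr = tally_annotations("annotations/")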
3.7.3 Specialized Domain Knowledge Required Finally, this task requires specialized cybersecurity domain knowledge from the annotator and the ability to apply such knowledge in a natural language context. For example, given the phrase “load the DLL into memory”, the annotator has to realize that this phrase matches the attribute label ActionName: 119: ProcessMemorymap_library_into_process. The abundance of labels with the many ways that each label can be expressed in natural language makes this task extremely challenging. 4 Proposed Tasks The main goal of creating this database is to aid cybersecurity researchers in parsing malwarerelated texts for important information. To this end, we propose several tasks that build up to this main goal. Task 1 Classify if a sentence is relevant for inferring malware actions and capabilities Task 2 Predict token labels for a given malwarerelated text Task 3 Predict relation labels for a given malware-related text 1560 Task 4 Predict attribute labels for a given malware-related text Task 5 Predict a malware’s signatures based on the text describing the malware and the text’s annotations Task 1 arose from discussions with domain experts where we found that a main challenge for cybersecurity researchers is having to sift out critical sentences from lengthy malware reports and articles. Figure 3 shows sentences describing the political and military background of North Korea in the APT report HPSR SecurityBriefing_Episode16_NorthKorea. Such information is essentially useless for cybersecurity researchers focused on malware actions and capabilities. It will be helpful to build a model that can filter relevant sentences that pertain to malware. Tasks 2 to 4 serve to automate the laborious annotation procedure as described earlier. With sufficient data, we hope that it becomes possible to build an effective model for annotating malwarerelated texts, using the framework and labels we defined earlier. Such a model will help to quickly increase the size of the database, which in turn facilitate other supervised learning tasks. Task 5 explores the possibility of using malware texts and annotations to predict a malware’s signatures. While conventional malware analyzers generate a list of malware signatures based on the malware’s activities in a sandbox, such analysis is often difficult due to restricted distribution of malware samples. In contrast, numerous malware reports are freely available and it will be helpful for cybersecurity researchers if such texts can be used to predict malware signatures instead of having to rely on a limited supply of malware samples. In the following experiments, we construct models for tackling each of these tasks and discuss the performance of our models. 5 Experiments and Results Since the focus of this paper is on the introduction of a new framework and database for annotating malware-related texts, we only use simple algorithms for building the models and leave more complex models for future work. For the following experiments, we use linear support vector machine (SVM) and multinomial Naive Bayes (NB) implementations in the scikitlearn library (Pedregosa et al., 2011). The regularization parameter in SVM and smoothing parameP R F1 SVM 69.7 54.0 60.5 NB 59.5 68.5 63.2 Table 2: Task 1 scores: classifying relevant sentences. ter in NB were tuned (with the values 10−3 to 103 in logarithmic increments) by taking the value that gave the best performance in development set. 
For experiments where Conditional Random Field (CRF) (Lafferty et al., 2001) is used, we utilized the CRF++ implementation (Kudo, 2005). For scoring the predictions, unless otherwise stated, we use the metrics module in scikit-learn for SVM and NB, as well as the CoNLL2000 conlleval Perl script for CRF1. Also, unless otherwise mentioned, we make use of all 39 annotated documents in the database. The experiments are conducted with a 60%/20%/20% training/development/test split, resulting in 23, 8 and 8 documents in the respective datasets. Each experiment is conducted 5 times with a different random allocation of the dataset splits and we report averaged scores2. Since we focus on building a database, we weigh recall and precision as equally important in the following experiments and hence focus on the F1 score metric. The relative importance of recall against precision will ultimately depend on the downstream tasks. 5.1 Task 1 - Classify sentences relevant to malware We make use of the annotations in our database for this supervised learning task and consider a sentence to be relevant as long as it has an annotated token label. For example, the sentences in Figure 2 will be labeled relevant whereas the sentences in Figure 3 will be labeled irrelevant. A simple bag-of-words model is used to represent each sentence. We then build two models – SVM and NB – for tackling this task. Results: Table 2 shows that while the NB model outperforms the SVM model in terms of F1 score, the performance of both models are still rather low with F1 scores below 70 points. We proceed to discuss possible sources of errors for the models. 1www.cnts.ua.ac.be/conll2000/chunking/output.html 2Note that therefore the averaged F1 may not be the harmonic mean of averaged P and R in the result tables. 1561 Figure 5: An example of a token (“a lure document”) labelled as both Subject and Object. In the first case, it is the recipient of the Action “used”, while in the latter case, it is the initiator of the Action “installed”. Figure 6: Actual and predicted annotations. For predicted annotations, the Entity label replaces the Subject and Object labels. Discussion: We find that there are two main types of misclassified sentences. 1. Sentences describing malware without implying specific actions These sentences often contain malware-specific terms, such as “payload” and “malware” in the following sentence. This file is the main payload of the malware. These sentences are often classified as relevant, probably due to the presence of malware-specific terms. However, such sentences are actually irrelevant because they merely describe the malware but do not indicate specific malware actions or capabilities. 2. Sentences describing attacker actions Such sentences mostly contain the term “attacker” or names of attackers. For instance, the following sentence is incorrectly classified as irrelevant. This is another remote administration tool often used by the Pitty Tiger crew. Such sentences involving the attacker are often irrelevant since the annotations focus on the malware and not the attacker. However, the above sentence implies that the malware is a remote administration tool and hence is a relevant sentence that implies malware capability. 5.2 Task 2 - Predict token labels Task 2 concerns automating Stage 1 for the annotation process described in Section 3.3. Within the annotated database, we find several cases where a single word-phrase may be annotated with both Subject and Object labels (see Figure 5). 
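A minimal version of this setup — bag-of-words features, a LinearSVC and a MultinomialNB classifier, with the C or alpha value chosen from the logarithmic grid by development-set F1 — looks roughly as follows. The sentence lists and labels are hypothetical placeholders, and this is a sketch of the setup rather than the exact experimental code.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score

def tune_and_evaluate(train_sents, y_train, dev_sents, y_dev):
    """Pick the SVM C / NB alpha from {1e-3, ..., 1e3} that maximizes dev F1."""
    vectorizer = CountVectorizer()                       # bag-of-words representation
    X_train = vectorizer.fit_transform(train_sents)
    X_dev = vectorizer.transform(dev_sents)
    grid = [10.0 ** p for p in range(-3, 4)]
    best = {}
    for name, make_model in [("svm", lambda v: LinearSVC(C=v)),
                             ("nb", lambda v: MultinomialNB(alpha=v))]:
        scored = []
        for v in grid:
            model = make_model(v).fit(X_train, y_train)
            scored.append((f1_score(y_dev, model.predict(X_dev)), v, model))
        best[name] = max(scored, key=lambda t: t[0])     # (dev F1, parameter, fitted model)
    return vectorizer, best

# Hypothetical usage with binary relevance labels (1 = relevant, 0 = irrelevant):
# vec, best = tune_and_evaluate(train_sents, y_train, dev_sents, y_dev)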
In order to simplify the model for prediction, we redefine Task 2 as predicting Entity, Action and Modifier labels for word-phrases. The single Entity label is used to replace both Subject and Object labels. Since the labels may extend beyond a single word token, we use the BIO format for indicating the span of the labels (Sang and Veenstra, 1999). We use two approaches for tackling this task: a) CRF is used to train a model for directly predicting token labels, b) A pipeline approach where the NB model from Task 1 is used to filter relevant sentences. A CRF model is then trained to predict token labels for relevant sentences. The CRF model in Approach 1 is trained on the entire training set, whereas the CRF model in Approach 2 is trained only on the gold relevant sentences in the training set. For features in both approaches, we use unigrams and bigrams, part-of-Speech labels from the Stanford POStagger (Toutanova et al., 2003), and Brown clustering features after optimizing the cluster size (Brown et al., 1992). A C++ implementation of the Brown clustering algorithm is 1562 Approach 1 Approach 2 Token Label P R F1 P R F1 Entity 48.8 25.1 32.9 42.8 33.8 37.6 Action 55.2 30.3 38.9 50.8 41.1 45.2 Modifier 55.7 28.4 37.3 48.9 37.4 42.1 Average 51.7 27.0 35.2 45.9 36.3 40.3 Table 3: Task 2 scores: predicting token labels. used (Liang, 2005). The Brown cluster was trained on a larger corpus of APT reports, consisting of 103 APT reports not in the annotated database and the 23 APT reports from the training set. We group together low-frequency words that appear 4 or less times in the set of 126 APT reports into one cluster and during testing we assign new words into this cluster. Results: Table 3 demonstrates that Approach 2 outperforms Approach 1 on most scores. Nevertheless, both approaches still give low performance for tackling Task 2 with F1-scores below 50 points. Discussion: There seem to be three main categories of wrong predictions: 1. Sentences describing attacker actions Such sentences are also a main source of prediction errors in Task 1. Again, most sentences describing attackers are deemed irrelevant and left unannotated because we focus on malware actions rather than human attacker actions. However, these sentences may be annotated in cases where the attacker’s actions imply a malware action or capability. For example, the Figure 6a describes the attackers stealing credentials. This implies that the malware used is capable of stealing and exfiltrating credentials. It may be challenging for the model to distinguish whether such sentences describing attackers should be annotated since a level of inference is required. 2. Sentences containing noun-phrases made up of participial phrases and/or prepositional phrases These sentences contain complex noun-phrases with multiple verbs and prepositions, such as in Figures 6b and 6c. In Figure 6b, “the RCS sample sent to Ahmed” is a noun-phrase annotated as a single Subject/Entity. However, the model decomposes the noun-phrase into the subsidiary noun “the RCS sample” and participial phrase “sent to Ahmed” and further decompose the participial phrase into the constituent words, predictApproach 1 Approach 2 Token Label P R F1 P R F1 Entity 63.6 32.1 42.3 56.5 46.3 50.6 Action 60.2 31.4 41.0 54.6 42.8 47.7 Modifier 56.4 28.1 37.1 50.1 37.1 42.3 Average 62.7 31.8 41.9 55.9 45.3 49.8 Table 4: Task 2 relaxed/token-level scores. 
Relation Label P R F1 SubjAction 86.3 82.3 84.2 ActionObj 91.6 86.2 88.8 ActionMod 98.5 96.4 97.4 ModObj 98.0 96.7 97.4 Average 89.2 89.4 89.3 Table 5: Task 3 scores: predicting relation labels. ing Action, Modifier and Entity labels for “sent”, “to” and “Ahmed” respectively. There are cases where such decomposition of noun-phrases is correct, such as in Figure 6c. As mentioned in Section 3.7, this is also a challenge for human annotators because there may be several ways to decompose the sentence, many of which serve equally well to highlight certain malware aspects (see Figure 4). Whether such decomposition is correct depends on the information that can be extracted from the decomposition. For instance, the decomposition in Figure 6c implies that the malware can receive remote commands from attackers. In contrast, the decomposition predicted by the model in Figure 6b does not offer any insight into the malware. This is a difficult task that requires recognition of the phrase spans and the ability to decide which level of decomposition is appropriate. 3. Sentences containing noun-phrases made up of determiners and adjectives These sentences contain noun-phrases with determiners and adjectives such as “All the requests” in Figure 6d. In such cases, the model may only predict the Entity label for part of the noun-phrase. This is shown in Figure 6d, where the model predicts the Entity label for “the requests” instead of “All the requests”. Thus, we also consider a relaxed scoring scheme where predictions are scored in token level instead of phrase level (see Table 4). The aim of the relaxed score is to give credit to the model when the span for a predicted label intersects with the span for the actual label, as in Figure 6d. 1563 Figure 7: An example of an entity with multiple parents. In this case, stage two payloads has two parents by ActionObject relations - downloading and executing. 5.3 Task 3 - Predict relation labels Following the prediction of token labels in Task 2, we move on to Task 3 for building a model for predicting relation labels. Due to the low performance of the earlier models for predicting token labels, for this experiment we decided to use the gold token labels as input into the model for predicting relation labels. Nevertheless, the models can still be chained in a pipeline context. The task initially appeared to be similar to a dependency parsing task where the model predicts dependencies between the entities demarcated by the token labels. However, on further inspection, we realized that there are several entities which have more than one parent entity (see Figure 7). As such, we treat the task as a binary classification task, by enumerating all possible pairs of entities and predicting whether there is a relation between each pair. Predicting the relation labels from the token labels seem to be a relatively straightforward task and hence we design a simple rule-based model for the predictions. We tuned the rule-based model on one of the documents (AdversaryIntelligenceReport_DeepPanda_0 (1)) and tested it on the remaining 38 documents. The rules are documented in Appendix B. Results: Table 5 shows the scores from testing the model on the remaining 38 documents. The results from the rule-based model are better than expected, with the average F1-scores exceeding 84 points for all the labels. This shows that the relation labels can be reliably predicted given good predictions of the preceding token labels. 
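For reference, the BIO encoding used in this task can be derived from character-offset annotations with a short routine such as the one below. It assumes whitespace tokenisation and (start, end, label) character spans over the raw sentence, both simplifications made for illustration; the example offsets are hypothetical.

def spans_to_bio(sentence, spans):
    """Convert character-offset annotations into per-token BIO tags.
    spans: iterable of (start, end, label), e.g. labels 'Entity', 'Action', 'Modifier'."""
    tokens, offsets, pos = [], [], 0
    for tok in sentence.split():
        start = sentence.index(tok, pos)
        tokens.append(tok)
        offsets.append((start, start + len(tok)))
        pos = start + len(tok)
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        inside = [i for i, (s, e) in enumerate(offsets) if s < end and e > start]
        for j, i in enumerate(inside):
            tags[i] = ("B-" if j == 0 else "I-") + label
    return list(zip(tokens, tags))

# Hypothetical example:
# spans_to_bio("The dropper registers itself as a service",
#              [(0, 11, "Entity"), (12, 21, "Action"), (22, 28, "Entity"),
#               (29, 31, "Modifier"), (32, 41, "Entity")])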
Discussion: The excellent performance from the rule-based model suggests that there is a welldefined structure in the relations between the entities. It may be possible to make use of this inherent structure to help improve the results for predicting the token labels. Also, notice that by predicting the SubjAction, ActionObj and ActionMod relations, we are simultaneously classifying the ambiguous Entity labels into specific Subject and Object labels. For instance, Rule 1 predicts a ModObj relation beAttribute Category NB SVM P R F1 P R F1 ActionName 35.2 23.9 28.0 43.9 27.9 33.9 Capability 41.5 39.8 40.6 42.5 41.1 41.8 StrategicObjectives 33.7 24.4 28.3 32.2 23.5 27.2 TacticalObjectives 27.6 17.4 21.1 30.2 18.4 22.7 Table 6: Task 4 scores: predicting attribute labels. tween a Modifier and an Entity, implying that the Entity is an Object, whereas Rule 3 predicts a SubjAction relation between an Entity and an Action, implying that the Entity is a Subject. 5.4 Task 4 - Predict attribute labels A significant obstacle in the prediction of attribute labels is the large number of attribute labels available. More precisely, we discover that many of these attribute labels occur rarely, if not never, in the annotated reports. This results in a severely sparse dataset for training a model. Due to the lack of substantial data, we decide to use token groups instead of entire sentences for predicting attribute labels. Token groups are the set of tokens that are linked to each other via relation labels. We extract the token groups from the gold annotations and then build a model for predicting the attribute labels for each token group. Again, we use a bag-of-words model to represent the token groups while SVM and NB are each used to build a model for predicting attribute labels. Results: Table 6 shows the average scores over 5 runs for the four separate attribute categories. For this task, SVM appears to perform generally better than NB, although much more data seems to be required in order to train a reliable model for predicting attribute labels. The Capability category shows the best performance, which is to be expected, since the Capability attributes occur the most frequently. Discussion: The main challenge for this task is the sparse data and the abundant attribute labels available. In fact, out of the 444 attribute labels, 190 labels do not appear in the database. For the remaining 254 attribute labels that do occur in the database, 92 labels occur less than five times and 50 labels occur only once. With the sparse data 1564 Features Used NB SVM P R F1 P R F1 Text only 58.8 50.8 53.5 49.3 47.0 47.2 Ann. only 64.7 55.0 58.0 62.6 57.2 59.2 Text and Ann. 59.3 50.7 53.6 54.3 51.1 51.6 Table 7: Task 5 scores: predicting malware signatures using text and annotations. available, particularly for rare attribute labels, effective one-shot learning models might have to be designed to tackle this difficult task. 5.5 Task 5 - Predict malware signatures using text and annotations Conventional malware analyzers, such as malwr.com, generate a list of signatures based on the malware’s activities in a sandbox. Examples of such signatures include antisandbox_sleep, which indicates anti-sandbox capabilities or persistence_autorun, which indicates persistence capabilities. 
If it is possible to build an effective model to predict malware signatures based on natural language texts about the malware, this can help cybersecurity researchers predict signatures of malware samples that are difficult to obtain, using the malware reports freely available online. By analyzing the hashes listed in each APT report, we obtain a list of signatures for the malware discussed in the report. However, we are unable to obtain the signatures for several hashes due to restricted distribution of malware samples. There are 8 APT reports without any obtained signatures, which are subsequently discarded for the following experiments. This leaves us with 31 out of 39 APT reports. The current list of malware signatures from Cuckoo Sandbox3 consists of 378 signature types. However, only 68 signature types have been identified for the malware discussed in the 31 documents. Furthermore, out of these 68 signature types, 57 signature types appear less than 10 times, which we exclude from the experiments. The experiments that follow will focus on predicting the remaining 11 signature types using the 31 documents. The OneVsRestClassifier implementation in scikit-learn is used in the following experiments, since this is a multilabel classification problem. We also use SVM and NB to build two types of 3https://cuckoosandbox.org/ models for comparison. Three separate methods are used to generate features for the task: a) the whole text in each APT report is used as features via a bag-of-words representation, without annotations, b) the gold labels from the annotations are used as features, without the text, and c) both the text and the gold annotations are used, via a concatenation of the two feature vectors. Results: Comparing the first two rows in Table 7, we can see that using the annotations as features significantly improve the results, especially the precision. SVM model also seems to benefit more from the annotations, even outperforming NB in one case. Discussion: The significant increase in precision suggests that the annotations provide a condensed source of features for predicting malware signatures, improving the models’ confidence. We also observe that some signatures seem to benefit more from the annotations, such as persistence_autorun and has_pdb. In particular, persistence_autorun has a direct parallel in attribute labels, which is MalwareCapability-persistence, showing that using MAEC vocabulary as attribute labels is appropriate. 6 Conclusion In this paper, we presented a framework for annotating malware reports. We also introduced a database with 39 annotated APT reports and proposed several new tasks and built models for extracting information from the reports. Finally, we discuss several factors that make these tasks extremely challenging given currently available models. We hope that this paper and the accompanying database serve as a first step towards NLP being applied in cybersecurity and that other researchers will be inspired to contribute to the database and to construct their own datasets and implementations. More details about this database can be found at http://statnlp.org/research/re/. Acknowledgments We would like to thank the anonymous reviewers for their helpful comments. This work is supported by ST Electronics – SUTD Cyber Security Laboratory Project 1 Big Data Security Analytics, and is partly supported by MOE Tier 1 grant SUTDT12015008. 1565 References Mamoun Alazab, Sitalakshmi Venkataraman, and Paul Watters. 2010. 
Towards Understanding Malware Behaviour by the Extraction of API Calls. In 2010 Second Cybercrime and Trustworthy Computing Workshop. IEEE, November 2009, pages 52–59. https://doi.org/10.1109/CTC.2010.8. Nicoló Andronio, Stefano Zanero, and Federico Maggi. 2015. HelDroid: Dissecting and Detecting Mobile Ransomware, Springer International Publishing, Cham, chapter 18, pages 382–404. https://doi.org/10.1007/978-3-319-26362-5. Kiran Blanda. 2016. APTnotes. https://github.com/ aptnotes/. Ismael Briones and Aitor Gomez. 2008. Graphs, entropy and grid computing: Automatic comparison of malware. In Virus bulletin conference. pages 1–12. Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based N-gram Models of Natural Language. Comput. Linguist. 18(4):467–479. http://dl.acm.org/citation.cfm?id=176313.176316. Stephen Checkoway, Damon McCoy, Brian Kantor, Danny Anderson, Hovav Shacham, Stefan Savage, Karl Koscher, Alexei Czeskis, Franziska Roesner, and Tadayoshi Kohno. 2011. Comprehensive Experimental Analyses of Automotive Attack Surfaces. In Proceedings of the 20th USENIX Conference on Security. USENIX Association, Berkeley, CA, USA, SEC’11, pages 6–6. http://dl.acm.org/citation.cfm?id=2028067.2028073. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement 20(1):37–46. https://doi.org/10.1177/001316446002000104. Jon DiMaggio. 2015. The Black Vine cyberespionage group. Technical report, Symantec. http://www.symantec.com/content/en/us/ enterprise/media/security_response/whitepapers/ the-black-vine-cyberespionage-group.pdf. Jon Gross. 2016. Operation Dust Storm. Technical report, Cylance. https://www.cylance.com/hubfs/ 2015_cylance_website/assets/operation-dust-storm/ Op_Dust_Storm_Report.pdf. Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. Semeval-2010 task 5: Automatic keyphrase extraction from scientific articles. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, Stroudsburg, PA, USA, SemEval ’10, pages 21–26. http://dl.acm.org/citation.cfm?id=1859664.1859668. Ivan Kirillov, Desiree Beck, Penny Chase, and Robert Martin. 2010. Malware Attribute Enumeration and Characterization. The MITRE Corporation, Tech. Rep . Taku Kudo. 2005. CRF++. https://taku910.github.io/ crfpp/. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In International Conference on Machine Learning (ICML 2001). pages 282–289. http://dl.acm.org/citation.cfm?id=655813. R. Langner. 2011. Stuxnet: Dissecting a Cyberwarfare Weapon. IEEE Security Privacy 9(3):49–51. https://doi.org/10.1109/MSP.2011.67. Percy Liang. 2005. Semi-supervised learning for natural language. Master’s thesis, Massachusetts Institute of Technology. https://doi.org/1721.1/33296. Steve Mead. 2006. Unique file identification in the National Software Reference Library. Digital Investigation 3(3):138 – 150. https://doi.org/10.1016/j.diin.2006.08.010. NIST. 2017. National Software Reference Library. http://www.nsrl.nist.gov/. OWASP. 2015. OWASP File Hash Repository. https://www.owasp.org/index.php/OWASP_File_ Hash_Repository. 
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12:2825–2830. http://dl.acm.org/citation.cfm?id=2078195. Yong Qiao, Jie He, Yuexiang Yang, and Lin Ji. 2013. Analyzing Malware by Abstracting the Frequent Itemsets in API Call Sequences. In 2013 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications. IEEE, pages 265–270. https://doi.org/10.1109/TrustCom.2013.36. Konrad Rieck, Philipp Trinius, Carsten Willems, and Thorsten Holz. 2011. Automatic analysis of malware behavior using machine learning. Journal of Computer Security 19(4):639–668. https://doi.org/10.3233/JCS-2010-0410. Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Representing Text Chunks. In Proceedings of the ninth conference on European chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Morristown, NJ, USA, page 173. https://doi.org/10.3115/977035.977059. Yusuke Shinyama. 2004. PDFMiner. https://euske. github.io/pdfminer/. 1566 Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. BRAT: A Web-based Tool for NLP-assisted Text Annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, EACL ’12, pages 102–107. http://dl.acm.org/citation.cfm?id=2380921.2380942. Ilya Sutskever, Dario Amodei, and Sam Altman. 2016. Special projects. Technical report, OpenAI, https://openai.com/blog/special-projects/. https://openai.com/blog/special-projects/. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich Part-of-speech Tagging with a Cyclic Dependency Network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL ’03, pages 173–180. https://doi.org/10.3115/1073445.1073478. US-CERT. 2016. Heightened DDoS Threat Posed by Mirai and Other Botnets. Technical report, United States Computer Emergency Readiness Team. https://www.us-cert.gov/ncas/alerts/TA16288A. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia 57. Carsten Willems, Thorsten Holz, and Felix Freiling. 2007. Toward Automated Dynamic Malware Analysis Using CWSandbox. IEEE Security and Privacy Magazine 5(2):32–39. https://doi.org/10.1109/MSP.2007.45. A Attribute Labels The following elaborates on the types of malware actions described by each class of attribute labels and gives specific examples. A.1 ActionName The ActionName labels describe specific actions taken by the malware, such as downloading a file ActionName: 090: Network- download_ file or creating a registry key ActionName: 135: Registry-create_registry_key. A.2 Capability The Capability labels describe general capabilities of the malware, such as exfiltrating stolen data Capability: 006: MalwareCapabilitydata_exfiltration or spying on the victim Capability: 019: MalwareCapability-spying. 
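A note on the label format: every attribute label above follows the same "Category: number: MAEC-vocabulary term" pattern, so a small helper — hypothetical, but consistent with the examples shown in this appendix — can split a label into its parts for downstream processing.

```python
# Hypothetical parser for the colon-separated attribute labels shown above,
# e.g. "Capability: 019: MalwareCapability-spying". The exact layout is
# inferred from the examples in this appendix, not specified elsewhere.
def parse_attribute_label(label):
    category, number, term = (part.strip() for part in label.split(":", 2))
    return {"category": category, "id": number, "term": term}

print(parse_attribute_label("ActionName: 090: Network-download_file"))
# -> {'category': 'ActionName', 'id': '090', 'term': 'Network-download_file'}
```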
A.3 StrategicObjectives The StrategicObjectives labels elaborate on the Capability labels and provide more details on the capabilities of the malware, such as preparing stolen data for exfiltration StrategicObjectives: 021: DataExfiltrationstage_data_for_exfiltration or capturing information from input devices connected to the victim’s machine StrategicObjectives: 061: Spyingcapture_system_input_peripheral_data. Each StrategicObjectives label belongs to a Capability label. A.4 TacticalObjectives The TacticalObjectives labels provide third level of details on the malware’s capability, such as encrypting stolen data for exfiltration TacticalObjectives: 053: DataExfiltration-encrypt_data or an ability to perform key-logging TacticalObjectives: 140: Spying-capture_keyboard_input. Again, each TacticalObjectives label belongs to a Capability label. B Rules for Rule-based Model in Task 3 The following are the rules used in the rule-based model described in Section 5.3. 1. If a Modifier is followed by an Entity, a ModObj relation is predicted between the Modifier and the Entity 2. If an Action is followed by an Entity, an ActionObj relation is predicted between the Action and the Entity 3. If an Entity is followed by an Action of tokenlength 1, a SubjAction relation is predicted between the Entity and the Action 4. An ActionObj relation is predicted between any Action that begins with be and the most recent previous Entity 5. An ActionObj relation is predicted between any Action that begins with is, was, are or were and ends with -ing and the most recent previous Entity 6. An ActionMod relation is predicted between all Modifiers and the most recent previous Action 1567
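The rules above are simple enough to implement directly. The sketch below is one possible reading of them — in particular, it treats "followed by" as adjacency between annotated spans, which the rule descriptions do not state explicitly — and is meant only to show how little machinery the rule-based baseline needs.

```python
# One possible implementation of Rules 1-6 above. Each span is a dict such as
# {"type": "Entity" | "Action" | "Modifier", "tokens": ["is", "dropping"]},
# listed in document order. "Followed by" is interpreted here as adjacency in
# that list, which is our assumption rather than something the paper states.
def predict_relations(spans):
    relations = []
    last_entity = last_action = None
    for i, span in enumerate(spans):
        nxt = spans[i + 1] if i + 1 < len(spans) else None
        first = span["tokens"][0].lower()
        last = span["tokens"][-1].lower()

        if span["type"] == "Modifier" and nxt and nxt["type"] == "Entity":
            relations.append(("ModObj", i, i + 1))                      # Rule 1
        if span["type"] == "Action" and nxt and nxt["type"] == "Entity":
            relations.append(("ActionObj", i, i + 1))                   # Rule 2
        if (span["type"] == "Entity" and nxt and nxt["type"] == "Action"
                and len(nxt["tokens"]) == 1):
            relations.append(("SubjAction", i, i + 1))                  # Rule 3
        if span["type"] == "Action" and last_entity is not None:
            if first == "be":
                relations.append(("ActionObj", i, last_entity))         # Rule 4
            if first in {"is", "was", "are", "were"} and last.endswith("ing"):
                relations.append(("ActionObj", i, last_entity))         # Rule 5
        if span["type"] == "Modifier" and last_action is not None:
            relations.append(("ActionMod", i, last_action))             # Rule 6

        if span["type"] == "Entity":
            last_entity = i
        elif span["type"] == "Action":
            last_action = i
    return relations
```

For instance, on a span sequence Entity → Action ("drops") → Entity, the sketch predicts a SubjAction and an ActionObj relation, mirroring the observation in Section 5.3 that relation predictions simultaneously resolve Entity spans into Subjects and Objects.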
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1568–1578 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1144 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1568–1578 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1144 A Corpus of Annotated Revisions for Studying Argumentative Writing Fan Zhang Homa B. Hashemi Rebecca Hwa Diane Litman University of Pittsburgh Pittsburgh, PA, 15260 {zhangfan,hashemi,hwa,litman}@cs.pitt.edu Abstract This paper presents ArgRewrite, a corpus of between-draft revisions of argumentative essays. Drafts are manually aligned at the sentence level, and the writer’s purpose for each revision is annotated with categories analogous to those used in argument mining and discourse analysis. The corpus should enable advanced research in writing comparison and revision analysis, as demonstrated via our own studies of student revision behavior and of automatic revision purpose prediction. 1 Introduction Most writing-related natural language processing (NLP) research focuses on the analysis of single drafts. Examples include document-level quality assessment (Attali and Burstein, 2006; Burstein and Chodorow, 1999), discourse-level analysis and mining (Burstein et al., 2003; Falakmasir et al., 2014; Persing and Ng, 2016), and fine-grained error detection (Leacock et al., 2010; Grammarly, 2016). Less studied is the analysis of changes between drafts – a comparison of revisions and the properties of the differences. Research on this topic can support applications involing revision analysis (Zhang and Litman, 2015), paraphrase (Malakasiotis and Androutsopoulos, 2011) and correction detection (Swanson and Yamangil, 2012; Xue and Hwa, 2014). Although there are some corpora resources for NLP research on writing comparisons, most tend to be between individual sentences/phrases for tasks such as paraphrase comparison (Dolan and Brockett, 2005; Tan and Lee, 2014) or grammar error correction (Dahlmeier et al., 2013; Yannakoudakis et al., 2011). In terms of revision analysis, the most relevant work analyzes Wikipedia revisions (Daxenberger and Gurevych, 2013; Bronner and Monz, 2012); however, the domain of Wikipedia is so specialized that the properties of Wikipedia revisions do not correspond well with other kinds of texts. This work presents the ArgRewrite corpus1 to facilitate revision analysis research for argumentative essays. The corpus consists of a collection of three drafts of essays written by university students and employees; the drafts are manually aligned at the sentence level, then the purpose of each revision is manually coded using a revision schema closely related to argument mining/discourse analysis. Within the domain of argumentative essays, the corpus will be useful for supporting research in argumentative revision analysis and the application of argument mining techniques. The corpus may also be useful for research on paraphrase comparisons, grammar error correction, and computational stylistics (Popescu and Dinu, 2008; Flekova et al., 2016). In this paper, we present two example uses of our corpus: 1) rewriting behavior data analysis, and 2) automatic revision purpose classification. 
2 Corpus Design Decisions Consider this scenario: Alice begins her social science argumentative essay with the sentence “Electronic communication allows people to make connections beyond physical limits.” An analytical system might (rightly) identify the sentence as the thesis of her essay, and an evaluative system might give the essay a low score due to this sentence’s vagueness and a later lack of evidence (though Alice may not know why she received that score). Now suppose in a revised draft, Alice expanded 1The corpus is based on the ArgRewrite system developed in our prior work (Zhang et al., 2016). 1568 the sentence: “Electronic communication allows people to make connections beyond physical limits location and enriches connections that would have been impossible to make otherwise.” An analytical system would still identify the sentence as the thesis, and an evaluative system might raise the overall score a little higher. Alice may become satisfied with the increase and move on. However, there is an opportunity lost – neither the analytical nor the evaluative system addressed the quality of her revision. A revision analysis system might be helpful for Alice because it would link “limits” to “location and ...” and identify the reason why she made the change – perhaps adding precision. If Alice had intended her change as a way to add evidential support for her thesis, she would see that her attempt was not as successful as she hoped. The above scenario highlights the application of a revision analysis system. This paper is about creating a corpus to enable the development of such systems. Because this is a relatively new problem, there are many possible ways for us to design the corpus. Here we discuss some of our decisions. First, we need to define the unit of revision. The example above illustrates a phrase-aligned revision. While this offers a fairly precise definition of the scope of a revision, it may be difficult to achieve consistent annotations. For example, the changes may not adhere to any syntactic linguistic unit. For this first corpus, we define our unit of revision to be at the sentence level. In other words, even if a pair of sentences contains multiple edits, the entire sentence pair will be annotated as one sentence revision. Second, we need to define the quality we want to observe about the revision sentence pair. For this first corpus, we focus on recognizing the purpose of the revision, as in the example above. It is a useful property, and it has previously been studied by others in the literature. People have considered both binary purpose categories such as Content vs. Surface (Faigley and Witte, 1981) or Factual vs. Fluency (Bronner and Monz, 2012) as well as more fine-grained categories (Pfeil et al., 2006; Jones, 2008; Liu and Ram, 2009; Daxenberger and Gurevych, 2012; Zhang and Litman, 2015). Our corpus follows the two-tiered schema used by (Zhang and Litman, 2015) (see Section 3.2). Third, we not only have to decide on the annotation format, we also need to decide how to obtain Write Draft1 @home Draft1 Revise Draft1 @home Draft2 Revise Draft2 @lab Draft3 Annotated Revisions I (Rev12) Annotated Revisions II (Rev23) Figure 1: Our collected corpus contains five components: three drafts of an essay and two annotated revisions between drafts. the raw text: argumentative essays with multiple drafts. We decided to sample from a population of predominantly college students, inclusive of both native and proficient non-native (aka L2) speakers. 
Comparing to high school students, college students are expected to produce essays having a better organization of the argument elements. Including native and L2 speakers allows for the exploration of possible rewriting differences between writers of varying backgrounds. We decided to give all subjects the same writing prompt and collect three drafts. The identical prompt minimizes the impact of topic difference for argumentationrelated study. The collection of three drafts allows for a comparison of revision differences at different stages of rewriting. Finally, we need a method for eliciting two revised drafts from each writer. Ideally, an instructor would give formative feedback after each draft for each student, but we do not have the resources to carry out such an expensive project. We simulate instructor feedback by asking students to add more examples after the first draft. To elicit a second revised draft, we use two different systems. First, we utilize an idealized2 version of the ArgRewrite revision analysis system (Zhang et al., 2016). ArgRewrite highlights the locations of revisions at the sentence level and colors the revisions differently according to the revision purpose types. Our second system shows a character-based comparison between subsequent essay drafts3. This system is designed to have a similar look as ArgRewrite by highlighting the location of revisions. However, the type of revisions are not provided. 2All automatic revision feedback was manually examined/corrected to guarantee correctness. 3Code derived from https://code.google.com/ p/google-diff-match-patch/ which implements Myers’ algorithm (Myers, 1986). 1569 (a) Interface A. (b) Interface B. Figure 2: Screenshot of the interfaces. (a) Interface A with the annotated revision purposes, (b) Interface B with a streamlined character-based diff. 3 The ArgRewrite Corpus Based on the above design decisions, we have developed a corpus of argumentative essays with three drafts and detailed annotations for sentencealigned revisions between each consecutive pair of drafts. The main corpus has five elements, with the relationships between them shown in Figure 1; Section 3.1 describes the procedure for obtaining them. Section 3.2 briefly describes the revision schema we used and reports the inter-annotator agreement. Additionally, we have collected metadata from the participants who contributed to the corpus (discussed in Section 3.3); these data may be useful for user behavior analysis. 3.1 Corpus Development Procedure We have recruited 60 participants aged 18 years and older, among whom 40 were English native speakers and 20 were non-native speakers with sufficient English proficiency.4 The study to collect the corpus is carried out in three 40-60 minute sessions over the duration of two weeks. Draft1 Each participant begins by completing a pre-study questionnaire (Section 3.3) and writing a short essay online. Participants are instructed to keep the essay around 400 words, making a single main point with two supporting examples. They are given the following prompt: “Suppose you’ve been asked to contribute a short op-ed piece for The New York Times. Argue whether the proliferation of electronic 4i.e., with a TOEFL score higher than 100. 
communications (e.g., email, text or other social media) enriches or hinders the development of interpersonal relationships.” Draft2 A few days later, participants are asked to revise their first draft online based on the following feedback: Strengthen the essay by adding one more example or reasoning for the claim; then add a rebuttal to an opposing idea; keep the essay at 400 words. With this feedback we try to push participants to make revisions for later processing by the two interfaces used to create Draft3. Annotated Revisions I (Rev12) The two drafts are semi-manually aligned at the sentence level.5 Then, the purpose of each pair of sentence revision is manually coded by a trained annotator, following the annotation guideline (see Section 3.2). Draft3 Participants write their third draft in a lab environment. This time, they are not given additional instructional feedback. Instead, participants are shown a computer interface that highlights the differences between their first and second drafts. They are asked to revise and create a third draft to improve the general quality of their essay. We experimented with two variations of revision elicitation. Chosen at random, half of the participants (10 L2 participants and 20 Native participants) are shown Interface A, the interface based on the ArgRewrite system (Zhang et al., 2016), which highlights the annotated differences between the drafts (Figure 2(a)); half of the participants are shown In5Sentences are first automatically aligned (Zhang and Litman, 2014), then manually corrected by human. 1570 Draft1 Revision Purpose Draft2 Revision Purpose Draft3 This world has no restriction on who one can talk to. Conventions/ Grammar/ Spelling This world has no restrictions on whom one can talk to. This world has no restrictions on whom one can talk to. Rebuttal/ Reservation Unfortunately, the younger users of digital communication cannot be entirely protected from the rhetoric of any outsider. Warrant/ Reasoning/ Backing Modern society is now faced with the issue of cyber bullying as a result. The only aspects of communication that this new development improves are internet navigation and faux internet relatability. WordUsage/ Clarity The only aspects of digital communication that this new development improves are internet navigation and faux internet relatability. WordUsage/ Clarity The only aspects of digital communication that this new development improves are internet navigation and faux internet relationships. Claims/ Ideas Being immersed in the sphere of new technologies can allow for complete isolation from the active, non-digital world. Being immersed in the sphere of new technologies can allow for complete isolation from the active, non-digital world. Table 1: Examples from the annotated corpus. The sentences were aligned across the drafts and the revision purposes were labeled on the aligned sentence pairs. From Draft1 to Draft2, there are two Modify revisions (Spelling and Clarity) and one Add revision. From Draft2 to Draft3, there are two Add revisions (Rebuttal and Reasoning) and one Modify revision (Clarity). terface B, a streamlined character-based diff (Figure 2(b)). In Interface A, some purposes were renamed from the original annotation categories to help the participants better understand the system (as detailed in Table 2)6. Both interface groups are asked to read a tutorial about their respective interfaces before beginning to revise. 
Participants in group A are also asked to verify the manually annotated revision purposes between their first and second drafts. This information is collected to investigate the impact of the difference between the system’s recognized and the participant’s intended purpose. After completing the final revision, all participants are given a post-study survey about their experiences (Section 3.3). Additionally, participants in group A are asked to verify the automatically predicted revision purposes between their second and third drafts (Section 4.2). Annotated Revisions II (Rev23) Regardless of which interface the participants used, the second and third draft are compared and annotated by the trained annotator in the same process as before. 6Figure 2(a) has two additional categories. Precision was intended to represent revisions that make a sentence more precise. Unknown was intended to represent revisions that cannot be categorized to existing categories. These two categories were not used during annotation as they were reported to be confusing in our pilot studies. 3.2 Revision Annotation Guidelines Following our prior corpus annotations (Zhang and Litman, 2015), sentence revisions are first coarsely categorized as Surface or Content changes (Faigley and Witte, 1981), depending on whether any informational content was modified; within each coarse category, we distinguish between several finer categories based on the argumentative and discourse writing literature (Kneupper, 1978; Faigley and Witte, 1981; Burstein et al., 2003). Our adapted schema has three Surface categories (Organization, Word Usage/Clarity, and Conventions/Grammar/Spelling) and five Content categories (Claim/Ideas, Warrant/Reasoning/Backing, Rebuttal/Reservation, Evidence, and General Content Development). Table 1 shows example aligned sentences in three collected drafts and their annotated revision categories. The edit types of revisions (Add, Delete and Modify) are decided according to the alignment of sentences. Two annotators (one is experienced, and the other is newly trained) participated in data annotation. The annotators first went through a “training” phase where both annotators annotated 5 files and discussed their disagreements to resolve misunderstandings. Then, both annotators separately annotated 10 new files and Kappa was calculated 1571 Name in Schema Name in System Definition Content Content revisions that changed the information of essay Claims/Ideas Ideas revisions that aimed to change the thesis of essay Warrant/Reasoning/Backing Reasoning revisions that aimed to change the reasoning of thesis Rebuttal/Reservation Rebuttal revisions that aimed to change the rebuttal of thesis Evidence Evidence revisions that aimed to change the evidence support for thesis General Content Other other types of content revisions Surface Surface revisions that did not change the information of essay Organization Reordering revisions that switched the order of sentences Word Usage/Clarity Fluency revisions that aimed to make the essay more fluent Conventions/Grammar/Spelling Errors revisions that aimed to fix the spelling/grammar mistakes Table 2: Definition of category names in Interface A. 
L2 (20) Draft1 Draft2 Draft3 Avg #Words 379.1 412.8 484.7 Avg #Sentences 18.6 20.2 23.7 Avg #Paragraphs 3.9 4.5 4.8 Native (40) Draft1 Draft2 Draft3 Avg #Words 372.4 394.7 531.6 Avg #Sentences 18.8 20.4 25.8 Avg #Paragraphs 4.0 4.7 5.1 Table 3: Descriptive statistics of the ArgRewrite Corpus, including average number of words, sentences and paragraphs per essay draft. on the annotation of these 10 new files. The Kappa on this held-out data is 0.84 on the two coarse categories of Surface vs. Content and 0.71 on the eight fine-grained categories that appear in Table 2. The disagreements between annotators were removed after discussion and the final labels were used as the gold standard annotation. 3.3 Meta-Data In addition to the raw text and annotations, the corpus release includes participant meta-data from both a pre-study and a post-study survey. Pre-Study Survey The pre-study survey asks for participants’ demographic information as well as their self-reported writing background, such as participants’ confidence in their writing ability, the number of drafts they typically make, etc. The questions are listed in Appendix A. Post-Study Survey The post-study survey contains questions about the participants’ in-lab revision experience, such as whether they found the computer interface helpful. All questions are answered on a scale of 1 to 5, ranging from “strongly disagree” to “strongly agree”. Details of questions are shown in Appendix B. 3.4 Descriptive Statistics Table 3 indicates the average number of words/sentences/paragraphs per essay draft. The corpus includes 180 essays: 120 (Draft1 and Draft2) with an average of about 400 words and 60 (Draft3) with an average of around 500 words. Among the 40 native speakers, there were 29 (72.5%) undergraduates, 6 (15%) graduate students, and 5 (12.5%) non-students (post-docs and lecturers). Among the 20 L2 speakers, there were 4 (20%) undergraduates, and 16 (80%) graduate students; there were 9 Chinese, 2 Bengali, 2 Marathi, 2 Persian, 1 Arabic, 1 Korean, 1 Portuguese, 1 Spanish, and 1 Tamil. In terms of discipline, 33 participants (55%) were from the natural sciences, 24 (40%) from the social sciences, and 2 (3.3%) from the humanities. 1 participant (1.7%) did not reveal his/her discipline. 3.5 Public Release The corpus is freely available for research usage7. The first release includes the raw text plus the revision annotations and the meta-data. The revision annotations are stored as .xlsx files. There are 60 spreadsheet files for revisions from Draft1 to Draft2 and 60 more spreadsheet files for revisions from Draft2 to Draft3. Each spreadsheet file contains two sheets: Old Draft and New Draft. Each row in the sheet represents one sentence in the corresponding draft. The index of the aligned sentence row in the other draft and the type of the revision on the sentence are recorded. The metadata are in .log text files. Information in the text files are stored using the JSON data format. 4 Example Uses of the Corpus While the development of a full fledged revision analysis system is outside the scope of this work, we demonstrate potential uses of our corpus with two examples. 
We first perform statistical analyses on the collected revision data and meta-data 7http://argrewrite.cs.pitt.edu 1572 Content Surface Rev12 Rev23 Rev12 Rev23 L2 (20) 172 78 163 176 Interface A 91 37 71 85 Interface B 81 41 92 91 Native (40) 334 285 303 246 Interface A 177 154 149 111 Interface B 157 131 154 135 Table 4: Number of revisions, by participant groups (language, interface), coarse-grain purposes, and revision drafts (Rev12 is between Draft1-Draft2; Rev23 is between Draft2-Draft3). to understand aspects of participant behavior. We also use the corpus to train a supervised classifier to automatically predict revision purposes. 4.1 Student Revision Behavior Analysis While it is well-established that thoughtful revisions improve one’s writing, and while many college-level courses require students to submit multiple drafts of writing assignments (Addison and McGee, 2010), instructors rarely monitor and provide feedback to students while they revise. This is partly due to instructors’ time constraints and partly due to their uncertainty about how to support students’ revisions (Cole, 2014; Melzer, 2014). There is much we do not know about how to stimulate students to self-reflect and revise. 4.1.1 Hypotheses Using the ArgRewrite Corpus, we can begin to ask and address some questions about revision habits and behaviors. Our first question is: How do different types of revision feedback impact student revision? And relatedly: Does student background (e.g., native vs. L2) make a difference? We thus mine the corpus to test the following hypotheses: H1. There is a difference in participants’ revising behaviors depending on which interface is used to elicit the third draft. H2. For participants who used Interface A, if the recognized revision purpose differs from the participants’ intended revision purpose, participants will further modify their revision. H3. L2 and native speakers have different behaviors in making revisions. H1 and H2 address the first question; H3 addresses the second. 4.1.2 Methodology To test the hypotheses, we will use both subjective and objective measures. Subjective measures are based on participant post-study survey answers. Ideally, objective measures should be based on an assessment of improvements in the revised drafts; since we do not have evaluative data at this time, we approximate the degree of improvement using the number of revisions, since these two quantities were demonstrated to be positively correlated (Zhang and Litman, 2015). The objective measures are computed from Tables 4 and 5. To compare differences between specific subgroups on the subjective and objective measures, we conduct ANOVA tests with two factors. There are multiple factors that can influence the users’ rewriting behaviors such as the user’s native language, education level and previous revision behaviors, etc. In our study, we try to explore the difference between interface groups considering one of the most salient confounding factors: language. We use one factor as the participant’s native language (whether the participant is native or L2) and the other factor as the interface used. To determine correlation between quantitative measures, we conduct Spearman (ρ) and Pearson (r) correlation tests. 4.1.3 Results and Discussion Testing for H1 Comparing Group A and Group B participants, we observe some differences. 
First, we detect that Group A agrees with the statement “The system helps me to recognize the weakness of my essay” more so than Group B (Group A has a mean rating of 3.97 (“Agree”) while Group B’s is 3.17 (“Neutral”), p < .003). Second, in Group A, there is a trending positive correlation between the number of revisions8 from Draft2 to Draft3 and the ratings for the statement “The system encourages me to make more revisions than I usually make” (ρ=.33 and p < .07); whereas there is no such correlation for Group B. Additional information about revision purposes may elicit a stronger self-reflection response in Group A participants. In contrast, in Group B, there is a significant negative correlation between the number of Rev12 and ratings for the statement “it is convenient to view my previous revisions with the system” (ρ=-.36 and p < .05). This suggests that the character-based interface is ineffective when participants have to reflect on many changes. 8The results reported are the normalized numbers #revisions #sentences, where #sentences represents the number of sentences in the draft before revision. The absolute numbers were also tested and similar findings were observed. 1573 Revision Purpose Draft1 to Draft2 Draft2 to Draft3 Totals #Add #Delete #Modify #Add #Delete #Modify Content 294 179 33 320 27 16 869 Claims/Ideas 25 8 4 5 0 0 42 Warrant/Reasoning/Backing 166 83 7 191 13 3 463 Rebuttal/Reservation 23 1 0 13 0 0 37 General Content 50 80 18 86 13 13 260 Evidence 30 7 4 25 1 0 67 Surface 0 0 466 0 0 422 888 Word Usage/Clarity 0 0 362 0 0 357 719 Conventions/Grammar/Spelling 0 0 75 0 0 52 127 Organization 0 0 29 0 0 13 42 Table 5: Number of revisions, by fine-grain revision purposes and edit types (add, delete, modify). On the other hand, when comparing the number of revisions made by Group A and Group B on Rev23 (controlling for their Rev12 numbers), we did not find a significant difference. As we did not observe a significant difference in the number of revisions made by the two interface groups, we cannot verify that H1 is true; possibly a larger pool of participants is needed, or possibly the writing assignment is not extensive enough (in length and in the number of drafts). Another possible explanation is that the system might only motivate the users to make more revisions when the feedback is different from the user’s intention. To further verify the correctness of H1, we plan to have the essays graded by experts. The graded scores could allow us to analyze whether essays improved more when Interface A was used. Testing for H2 Focusing on the 30 participants from Group A, we check the impact of the feedback regarding Rev12 on how they subsequently revise (Rev23). We counted the Add and Modify revisions where the participant disagrees with the revision purpose assigned by the annotator in Rev12. Of those, we then count the number of times the corresponding sentences were further revised9. Of the 53 sentences where the participants disagreed with the annotator, 45 were further revised in the third draft. The ratio is 0.849, much higher than the overall ratio of general Rev12 revisions being further revised in Rev23 (161/394 = 0.409) and the ratio of the agreed Rev12 revisions being revised in Rev23 (67/341 = 0.196). In further analysis, a Pearson correlation test is conducted to check the correlation between the number of Rev23 and the number of disagreements for different types of agreement/disagreements, controlling for the number of Rev12. 
We find a nega9Delete revisions were ignored as the deleted sentences are not traceable in Draft3 tive correlation between Rev23 and the number of cases (r=-0.41, p < .03) in which the revisions annotated as Content are verified by the participants; we also find a positive correlation between Rev23 and the number of cases (r=0.36, p <= .05) in which the revisions annotated as Surface are intended to be Content revisions by the participants. Both findings are consistent with H2, suggesting that participants will revise further if they perceive that their intended revisions were not recognized. Testing for H3 We observe that native and L2 speakers exhibit different behaviors. First, we tested the difference in Content23 and Surface2310 between these speaker groups with ANOVA. We observe significant difference in the number of content (p < .02) and surface (p < .03) revisions made by L2 and native speakers. More specifically, our native speakers make more Content changes while the L2 speakers make more Surface changes. Second, with ANOVA we found a significant interaction effect of the two factors (Group and users’ L2 or native status) (p < .021) on their ratings for the statement “the system helps me to recognize the weakness of my essay” with L2 speakers having a stronger Interface A preference. Third, we observe a significant positive correlation in the native group between the number of content revisions in Rev23 and the ratings of the statement “the system encourages me to make more revisions than I usually make” (ρ=.4 and p < .009). This suggests that giving feedback (from either interface) encourages native speakers to make more content revisions. Finally, in the L2 group, there is a significant negative correlation between the number of surface revisions in Rev12 and the ratings for the statement “the system helps me to recognize the weakness of my es10content/surface revisions from Draft2 to Draft3 1574 say” (ρ=-.57 and p < .008). This shows that giving feedback to L2 speakers is less helpful when they make more surface revisions. These results are consistent with H3. Summary Our findings suggest that feedback on revisions do impact how students review and rewrite their drafts. However, there are many factors at play, including the interface design and the students’ linguistic backgrounds. 4.2 Automatic Revision Identification Another use of the corpus is to serve as a gold standard for training and testing a revision purpose prediction component for use in an automatic revision analysis system. In the version of ArgRewrite evaluated earlier (Interface A), the manual annotation of revision purposes enabled the system to provide revision feedback to users, which motivated them to improve their writing (H2). Automatic argumentative revision purpose prediction has been previously investigated by Zhang and Litman (2015). They developed and reported the performance of a binary classifier for each individual revision category (1 for revisions of the category and 0 for the rest of all revisions) using features from prior research. The availability of our corpus makes it possible for researchers to replicate such methods and conduct further studies. 4.2.1 Hypotheses In this paper, we repeat the experiment of Zhang and Litman (2015) under different settings to investigate three new hypotheses that can now be investigated given the features of our corpus: H4. The method used in Zhang and Litman (2015) for high school writings is also useful for the writings of college students. H5. 
The same revision classification method works differently for first revision attempts and second revision attempts. H6. The revision classification model trained on L2 essays has a different preference from the model trained on native essays. 4.2.2 Methodology We followed the work of (Zhang and Litman, 2015), where unigram features (words) were used as the baseline and the SVM classifier was used. Besides unigrams, three groups of features used in revision analysis, argument mining and discourse analysis research were extracted (Location, Textual and Language) as in Table 6 (Bronner and Monz, 2012; Daxenberger and Gurevych, 2013; Burstein et al., 2001; Falakmasir et al., 2014). For H4, 10-fold (participant) cross-validation is conducted on all the essays in the corpus. Unweighted average F-score for each revision category is reported, using unigram features versus using all features. Zhang and Litman (2015) observed a significant improvement over the unigram baseline using all the features. If H4 is true, we should expect a similar improvement over the unigram baseline using our corpus. For H5, 10-fold cross-validation was conducted for the revisions from Draft1 to Draft2 and revisions from Draft2 to Draft3 separately. We compared the improvement ratio brought by the advanced features over the unigram baseline. For H6, we trained two classifiers separately with L2 and native essays with all the features. 20 native participants were first randomly selected as the test data. Afterwards classifiers were trained separately using the 20 L2 participants’ essays and the remaining 20 native participants’ essays. We would expect that the performance of the two trained classifiers is different on the same test data. 4.2.3 Results and Discussion The first two rows of Table 7 support H4. We observe that the method (SVM + all features) used in Zhang and Litman (2015) significantly improves performance (compared to a unigram baseline) for half of the classification tasks, which is similar to Zhang and Litman’s results on high school (primarily L1) writing. In our corpus, performance on Claim, Evidence, Rebuttal and Organization was not significantly better than the baseline, possibly due to the limited number of positive training samples for these categories (Table 5). For example, one reason that the performance in Table 7 for Evidence might be low is that there are less than 100 Evidence instances in Table 5. For H5, the four rows in the middle of Table 7 show the difference of the cross-validation results on first attempt revisions and second attempt revisions. The earlier results using all the revisions, versus now just using only Rev12 or Rev23 revisions are similar, which provides little support for H5. With one exception, the features proposed in Zhang and Litman (2015) could again significantly improve the performance over the unigram baseline, for the same set of categories as when using all the revisions. However, for the Conventions/Grammar/Spelling category, we did not ob1575 Group Illustration Location The location of revised sentences in the paragraph/essay (e.g., whether the sentence is the first or last sentence of the paragraph/essay, the index of the sentence in the paragraph) Textual The textual features of revised sentences (e.g., whether the sentence contains a named entity, certain discourse markers (“because”, “due to”, etc), sentence difference (edit distance, difference in punctuations, etc.) 
and edit types (Add, Delete or Modify)) Language The language features of revised sentences (e.g., difference in POS tags, spelling/grammar mistakes) Table 6: Illustration of features used in the revision classification study. Experiments Text-based Surface Claim Warrant General Evidence Rebuttal Org. Word Conv 10fold + All Revs + Unigram 0.49 0.58 0.48 0.49 0.49 0.49 0.73 0.49 10fold + All Revs + All features 0.49 0.77∗ 0.55∗ 0.50 0.49 0.49 0.86∗ 0.62∗ 10fold + Rev12 + Unigram 0.50 0.58 0.47 0.50 0.50 0.50 0.57 0.62 10fold + Rev12 + All features 0.50 0.77∗ 0.56∗ 0.50 0.50 0.50 0.72∗ 0.72∗ 10fold + Rev23 + Unigram 0.50 0.46 0.53 0.49 0.50 0.50 0.58 0.46 10fold + Rev23 + All features 0.50 0.60∗ 0.65∗ 0.49 0.50 0.50 0.78∗ 0.50 20 L2 (train) + 20 Native (test) 0.50 0.72 0.48 0.49 0.50 0.50 0.83 0.63 20 Native (train) + 20 Native (test) 0.50 0.76 0.52 0.49 0.50 0.50 0.89 0.54 Table 7: Average unweighted F-score for each binary classification task. The first 6 rows show the average value of 10-fold cross-validation. ∗indicates significantly better than unigram baseline (p < .05). The last 2 rows show the F-value for training on L2/Native data and testing on Native data. Bold indicates larger than the number in the other row. serve a significant improvement for revisions from Draft2 and Draft3. A possible explanation is that there is a bigger difference in the writers’ rewriting behavior from Draft2 to Draft3, which increases the difficulty of prediction. The last two rows of Table 7 support H6. Interestingly, we observe a better performance on Warrant, General and Word Usage/Clarity with a classifier trained and tested using native essays. Perhaps essays of native speakers are more similar to each other when revised along these dimensions. For Conventions/Grammar/Spelling, in contrast, the classifier trained on L2 data yields a better performance on the same native test set. This may be because the L2 revisions usually include more spelling/grammar corrections. 5 Conclusion and Future Work We have presented a new corpus for writing comparison research. Currently the corpus focuses on essay revisions made by both native and L2 college students. In addition to three drafts of essays, we have analyzed the drafts to align semantically similar sentences and to assign revision purposes for each revised aligned sentence pair. We have also conducted two studies to demonstrate the use of the corpus for revision behavior analysis and for automatic revision purpose classification. While in this paper we explored language as one factor influencing rewriting behavior, our corpus also contains information about other potential factors such as gender and education level which we plan to investigate in the future. We also plan to augment the corpus to support additional types of research on revision analysis. Some potential augmentations include more fine-grained revision categories, revision properties such as statement strength (Tan and Lee, 2014) and quality evaluations, and sub-sentential revision scopes. Acknowledgments We want to thank Amanda Godley, Geeta Kothari, and the members of the ArgRewrite group (Reed Armstrong, Nicolo Manfredi and Tazin Afrin) for their helpful feedback and the anonymous reviewers for their suggestions. We also want to thank Adam Hobaugh, Dennis Wakefield and Anthony M Taliani for their assistance in the set up of study environments. This material is based upon work supported by the National Science Foundation under Grant No. 1550635. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This research is also funded by the Learning Research and Development Center of the University of Pittsburgh. 1576 References Joanne Addison and Sharon James McGee. 2010. Writing in high school/writing in college: Research trends and future directions. College Composition and Communication pages 147–179. Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater R⃝v. 2. The Journal of Technology, Learning and Assessment 4(3). Amit Bronner and Christof Monz. 2012. User edits classification using document revision histories. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. pages 356–366. Jill Burstein and Martin Chodorow. 1999. Automated essay scoring for nonnative English speakers. In Proceedings of a Symposium on Computer Mediated Language Assessment and Evaluation in Natural Language Processing. pages 68–75. Jill Burstein, Daniel Marcu, Slava Andreyev, and Martin Chodorow. 2001. Towards automatic classification of discourse elements in essays. In Proceedings of the 39th annual Meeting on Association for Computational Linguistics. pages 98–105. Jill Burstein, Daniel Marcu, and Kevin Knight. 2003. Finding the WRITE stuff: Automatic identification of discourse structure in student essays. IEEE Intelligent Systems 18(1):32–39. Daniel Cole. 2014. What if the earth is flat? working with, not against, faculty concerns about grammar in student writing. The WAC Journal 25:7–35. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. pages 22–31. Johannes Daxenberger and Iryna Gurevych. 2012. A corpus-based study of edit categories in featured and non-featured Wikipedia articles. In COLING. pages 711–726. Johannes Daxenberger and Iryna Gurevych. 2013. Automatically classifying edit categories in Wikipedia revisions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pages 578–589. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proc. of IWP. Lester Faigley and Stephen Witte. 1981. Analyzing revision. College composition and communication pages 400–414. Mohammad Hassan Falakmasir, Kevin D Ashley, Christian D Schunn, and Diane J Litman. 2014. Identifying thesis and conclusion statements in student essays to scaffold peer review. In International Conference on Intelligent Tutoring Systems. pages 254–259. Lucie Flekova, Daniel Preot¸iuc-Pietro, and Lyle Ungar. 2016. Exploring stylistic variation with age and income on Twitter. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Short Papers). pages 313–319. Grammarly. 2016. Grammarly. http://www.grammarly.com. [Online; accessed 04-10-2017]. John Jones. 2008. Patterns of revision in online writing a study of Wikipedia’s featured articles. Written Communication 25(2):262–289. Charles W Kneupper. 1978. Teaching argument: An introduction to the Toulmin model. College Composition and Communication 29(3):237–241. Claudia Leacock, Martin Chodorow, Michael Gamon, and Joel Tetreault. 2010. Automated grammatical error detection for language learners. 
Synthesis lectures on human language technologies 3(1):1–134. Jun Liu and Sudha Ram. 2009. Who does what: Collaboration patterns in the Wikipedia and their impact on data quality. In 19th Workshop on Information Technologies and Systems. pages 175–180. Prodromos Malakasiotis and Ion Androutsopoulos. 2011. A generate and rank approach to sentence paraphrasing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 96–106. Dan Melzer. 2014. The connected curriculum: Designing a vertical transfer writing curriculum. The WAC Journal 25:78–91. Eugene W Myers. 1986. An O(ND) difference algorithm and its variations. Algorithmica 1(1-4):251– 266. Isaac Persing and Vincent Ng. 2016. End-to-end argumentation mining in student essays. In Proceedings of NAACL-HLT. pages 1384–1394. Ulrike Pfeil, Panayiotis Zaphiris, and Chee Siang Ang. 2006. Cultural differences in collaborative authoring of Wikipedia. Journal of Computer-Mediated Communication 12(1):88–113. Marius Popescu and Liviu P. Dinu. 2008. Rank distance as a stylistic similarity. In Coling 2008. pages 91–94. Ben Swanson and Elif Yamangil. 2012. Correction detection and error type selection as an ESL educational aid. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 357–361. 1577 Chenhao Tan and Lillian Lee. 2014. A corpus of sentence-level revisions in academic writing: A step towards understanding statement strength in communication. In Proceedings of ACL (short paper). Huichao Xue and Rebecca Hwa. 2014. Improved correction detection in revised ESL sentences. In ACL (2). pages 599–604. Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. pages 180–189. Fan Zhang, Rebecca Hwa, Diane Litman, and Homa B Hashemi. 2016. Argrewrite: A web-based revision assistant for argumentative writings. NAACL HLT 2016 page 37. Fan Zhang and Diane Litman. 2014. Sentence-level rewriting detection. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications. pages 149–154. Fan Zhang and Diane Litman. 2015. Annotation and classification of argumentative writing revisions. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications. pages 133–143. A Questions of the pre-study survey 1. Is English your native language? 2. (only L2 participants) What is your native language? 3. What is your major? Please select the closest discipline to your major. • Natural sciences • Social sciences • Humanities 4. Are you an undergraduate or graduate student? 5. What is your current year of study? 6. When writing a paper for a class, how many drafts of major revisions do you typically make? 7. Overall, how confident are you with your writings? (Not at all confident, Not very confident, Somewhat confident, confident, Extremely confident) 8. (only L2 participants) Please tell us how comfortable you feel about writing in the English language versus writing in your primary language. (Not at all comfortable, Not very comfortable, Somewhat comfortable, comfortable, Extremely comfortable) 9. What are some of your recent classes that have an intensive writing component to them? How did you do in these classes? 10. What aspects of writing do you think you are good at? e.g. 
vocabulary choice, clear sentences, writing organization. 11. What aspects of writing do you think you can improve? B Questions of the post-study survey 1. The system allows me to have a better understanding of my previous revision efforts. 2. It is convenient to view my previous revisions with the system. 3. The system helps me to recognize the weakness of my essay. 4. The system encourages me to make more revisions than I usually make. 5. The system encourages me to think more about making more meaningful changes. 6. Overall the system is helpful to my writing. 1578
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1579–1590 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1145 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1579–1590 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1145 Watset: Automatic Induction of Synsets from a Graph of Synonyms Dmitry Ustalov†∗, Alexander Panchenko‡, and Chris Biemann‡ †Institute of Natural Sciences and Mathematics, Ural Federal University, Russia ∗Krasovskii Institute of Mathematics and Mechanics, Russia ‡Language Technology Group, Department of Informatics, Universit¨at Hamburg, Germany [email protected] {panchenko,biemann}@informatik.uni-hamburg.de Abstract This paper presents a new graph-based approach that induces synsets using synonymy dictionaries and word embeddings. First, we build a weighted graph of synonyms extracted from commonly available resources, such as Wiktionary. Second, we apply word sense induction to deal with ambiguous words. Finally, we cluster the disambiguated version of the ambiguous input graph into synsets. Our meta-clustering approach lets us use an efficient hard clustering algorithm to perform a fuzzy clustering of the graph. Despite its simplicity, our approach shows excellent results, outperforming five competitive state-of-the-art methods in terms of F-score on three gold standard datasets for English and Russian derived from large-scale manually constructed lexical resources. 1 Introduction A synset is a set of mutual synonyms, which can be represented as a clique graph where nodes are words and edges are synonymy relations. Synsets represent word senses and are building blocks of WordNet (Miller, 1995) and similar resources such as thesauri and lexical ontologies. These resources are crucial for many natural language processing applications that require common sense reasoning, such as information retrieval (Gong et al., 2005) and question answering (Kwok et al., 2001; Zhou et al., 2013). However, for most languages, no manually-constructed resource is available that is comparable to the English WordNet in terms of coverage and quality. For instance, Kiselev et al. (2015) present a comparative analysis of lexical resources available for the Russian language concluding that there is no resource compared to WordNet in terms of coverage and quality for Russian. This lack of linguistic resources for many languages urges the development of new methods for automatic construction of WordNetlike resources. The automatic methods foster construction and use of the new lexical resources. Wikipedia1, Wiktionary2, OmegaWiki3 and other collaboratively-created resources contain a large amount of lexical semantic information— yet designed to be human-readable and not formally structured. While semantic relations can be automatically extracted using tools such as DKPro JWKTL4 and Wikokit5, words in these relations are not disambiguated. For instance, the synonymy pairs (bank, streambank) and (bank, banking company) will be connected via the word “bank”, while they refer to the different senses. This problem stems from the fact that articles in Wiktionary and similar resources list undisambiguated synonyms. They are easy to disambiguate for humans while reading a dictionary article, but can be a source of errors for language processing systems. 
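To see this problem in graph terms, consider the two synonymy pairs just mentioned: once loaded into an undirected graph, the unrelated senses of "bank" become connected through its single surface form. The snippet below uses networkx only as a convenient way to make this concrete; the paper does not prescribe any particular graph library.

```python
# The ambiguity problem above, stated as a graph: undisambiguated synonymy
# pairs connect unrelated senses through the shared surface form "bank".
import networkx as nx

G = nx.Graph()
G.add_edges_from([("bank", "streambank"), ("bank", "banking company")])

# Two words that share no sense are now only two hops apart -- the kind of
# spurious path a synset induction method has to break.
print(nx.shortest_path(G, "streambank", "banking company"))
# ['streambank', 'bank', 'banking company']
```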
The contribution of this paper is a novel approach that resolves ambiguities in the input graph to perform fuzzy clustering. The method takes as an input synonymy relations between potentially ambiguous terms available in human-readable dictionaries and transforms them into a machine readable representation in the form of disambiguated synsets. Our method, called WATSET, is based on a new local-global meta-algorithm for fuzzy graph clustering. The underlying principle is to discover the word senses based on a local graph cluster1http://www.wikipedia.org 2http://www.wiktionary.org 3http://www.omegawiki.org 4https://dkpro.github.io/dkpro-jwktl 5https://github.com/componavt/wikokit 1579 ing, and then to induce synsets using global sense clustering. We show that our method outperforms other methods for synset induction. The induced resource eliminates the need in manual synset construction and can be used to build WordNet-like semantic networks for under-resourced languages. An implementation of our method along with induced lexical resources is available online.6 2 Related Work Methods based on resource linking surveyed by Gurevych et al. (2016) gather various existing lexical resources and perform their linking to obtain a machine-readable repository of lexical semantic knowledge. For instance, BabelNet (Navigli and Ponzetto, 2012) relies in its core on a linking of WordNet and Wikipedia. UBY (Gurevych et al., 2012) is a general-purpose specification for the representation of lexical-semantic resources and links between them. The main advantage of our approach compared to the lexical resources is that no manual synset encoding is required. Methods based on word sense induction try to induce sense representations without the need for any initial lexical resource by extracting semantic relations from text. In particular, word sense induction (WSI) based on word ego networks clusters graphs of semantically related words (Lin, 1998; Pantel and Lin, 2002; Dorow and Widdows, 2003; V´eronis, 2004; Hope and Keller, 2013; Pelevina et al., 2016; Panchenko et al., 2017a), where each cluster corresponds to a word sense. An ego network consists of a single node (ego) together with the nodes they are connected to (alters) and all the edges among those alters (Everett and Borgatti, 2005). In the case of WSI, such a network is a local neighborhood of one word. Nodes of the ego network are the words which are semantically similar to the target word. Such approaches are able to discover homonymous senses of words, e.g., “bank” as slope versus “bank” as organisation (Di Marco and Navigli, 2012). However, as the graphs are usually composed of semantically related words obtained using distributional methods (Baroni and Lenci, 2010; Biemann and Riedl, 2013), the resulting clusters by no means can be considered synsets. Namely, (1) they contain words related not only via synonymy relation, but via a mixture of relations such as synonymy, hypernymy, 6https://github.com/dustalov/watset co-hyponymy, antonymy, etc. (Heylen et al., 2008; Panchenko, 2011); (2) clusters are not unique, i.e., one word can occur in clusters of different ego networks referring to the same sense, while in WordNet a word sense occurs only in a single synset. In our synset induction method, we use word ego network clustering similarly as in word sense induction approaches, but apply them to a graph of semantically clean synonyms. 
Methods based on clustering of synonyms, such as our approach, induce the resource from an ambiguous graph of synonyms where edges a extracted from manually-created resources. According to the best of our knowledge, most experiments either employed graph-based word sense induction applied to text-derived graphs or relied on a linking-based method that already assumes availability of a WordNet-like resource. A notable exception is the ECO approach by Gonc¸alo Oliveira and Gomes (2014), which was applied to induce a WordNet of the Portuguese language called Onto.PT.7 We compare to this approach and to five other state-of-the-art graph clustering algorithms as the baselines. ECO (Gonc¸alo Oliveira and Gomes, 2014) is a fuzzy clustering algorithm that was used to induce synsets for a Portuguese WordNet from several available synonymy dictionaries. The algorithm starts by adding random noise to edge weights. Then, the approach applies Markov Clustering (see below) of this graph several times to estimate the probability of each word pair being in the same synset. Finally, candidate pairs over a certain threshold are added to output synsets. MaxMax (Hope and Keller, 2013) is a fuzzy clustering algorithm particularly designed for the word sense induction task. In a nutshell, pairs of nodes are grouped if they have a maximal mutual affinity. The algorithm starts by converting the undirected input graph into a directed graph by keeping the maximal affinity nodes of each node. Next, all nodes are marked as root nodes. Finally, for each root node, the following procedure is repeated: all transitive children of this root form a cluster and the root are marked as non-root nodes; a root node together with all its transitive children form a fuzzy cluster. Markov Clustering (MCL) (van Dongen, 2000) is a hard clustering algorithm for graphs based on simulation of stochastic flow in graphs. 7http://ontopt.dei.uc.pt 1580 Background Corpus Synonymy Dictionary Learning Word Embeddings Graph Construction Synsets Word Similarities Ambiguous Weighted Graph Local Clustering: Word Sense Induction Global Clustering: Synset Induction Sense Inventory Disambiguation of Neighbors Disambiguated Weighted Graph Local-Global Fuzzy Graph Clustering Figure 1: Outline of the WATSET method for synset induction. MCL simulates random walks within a graph by alternation of two operators called expansion and inflation, which recompute the class labels. Notably, it has been successfully used for the word sense induction task (Dorow and Widdows, 2003). Chinese Whispers (CW) (Biemann, 2006) is a hard clustering algorithm for weighted graphs that can be considered as a special case of MCL with a simplified class update step. At each iteration, the labels of all the nodes are updated according to the majority labels among the neighboring nodes. The algorithm has a meta-parameter that controls graph weights that can be set to three values: (1) top sums over the neighborhood’s classes; (2) nolog downgrades the influence of a neighboring node by its degree or by (3) log of its degree. Clique Percolation Method (CPM) (Palla et al., 2005) is a fuzzy clustering algorithm for unweighted graphs that builds up clusters from k-cliques corresponding to fully connected subgraphs of k nodes. While this method is only commonly used in social network analysis, we decided to add it to the comparison as synsets are essentially cliques of synonyms, which makes it natural to apply an algorithm based on clique detection. 
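Since Chinese Whispers is the simplest of the hard clustering algorithms compared here, a short sketch of its label-propagation step may help make the descriptions above concrete. This is a minimal illustration under our own assumptions about the data structures (an edge-weight dictionary and a fixed iteration count); it is not the reference implementation cited below.

```python
import random
from collections import defaultdict

def chinese_whispers(nodes, edges, iterations=20, seed=0):
    """Hard clustering of a weighted, undirected graph.

    nodes: iterable of hashable node ids.
    edges: dict mapping (u, v) pairs to positive weights.
    Returns a dict mapping each node to its cluster label.
    """
    rng = random.Random(seed)
    # Build a symmetric adjacency list from the undirected edge weights.
    adj = defaultdict(dict)
    for (u, v), w in edges.items():
        adj[u][v] = w
        adj[v][u] = w
    # Every node starts in its own class.
    labels = {n: n for n in nodes}
    order = list(nodes)
    for _ in range(iterations):
        rng.shuffle(order)
        for u in order:
            if not adj[u]:
                continue
            # Each neighbour votes for its current label, weighted by the edge.
            votes = defaultdict(float)
            for v, w in adj[u].items():
                votes[labels[v]] += w   # the "top" weighting mode
            labels[u] = max(votes, key=votes.get)
    return labels
```

Clusters are then the groups of nodes sharing a label; the nolog and log variants described above would additionally divide each neighbour's vote by its degree or by the logarithm of its degree.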
3 The WATSET Method The goal of our method is to induce a set of unambiguous synsets by grouping individual ambiguous synonyms. An outline of the proposed approach is depicted in Figure 1. The method takes a dictionary of ambiguous synonymy relations and a text corpus as an input and outputs synsets. Note that the method can be used without a background corpus, yet as our experiments will show, corpusbased information improves the results when utilizing it for weighting the word graph’s edges. A synonymy dictionary can be perceived as a graph, where the nodes correspond to lexical entries (words) and the edges connect pairs of the nodes when the synonymy relation between them holds. The cliques in such a graph naturally form densely connected sets of synonyms corresponding to concepts (Gfeller et al., 2005). Given the fact that solving the clique problem exactly in a graph is NP-complete (Bomze et al., 1999) and that these graphs typically contain tens of thousands of nodes, it is reasonable to use efficient hard graph clustering algorithms, like MCL and CW, for finding a global segmentation of the graph. However, the hard clustering property of these algorithm does not handle polysemy: while one word could have several senses, it will be assigned to only one cluster. To deal with this limitation, a word sense induction procedure is used to induce senses for all words, one at the time, to produce a disambiguated version of the graph where a word is now represented with one or many word senses. The concept of a disambiguated graph is described in (Biemann, 2012). Finally, the disambiguated word sense graph is clustered globally to induce synsets, which are hard clusters of word senses. More specifically, the method consists of five steps presented in Figure 1: (1) learning word embeddings; (2) constructing the ambiguous weighted graph of synonyms G; (3) inducing the word senses; (4) constructing the disambiguated weighted graph G′ by disambiguating of neighbors with respect to the induced word senses; (5) global clustering of the graph G′. 3.1 Learning Word Embeddings Since different graph clustering algorithms are sensitive to edge weighting, we consider distributional semantic similarity based on word embeddings as a possible edge weighting approach for our synonymy graph. As we show further, this approach improves over unweighted versions and yields the best overall results. 3.2 Construction of a Synonymy Graph We construct the synonymy graph G = (V, E) as follows. The set of nodes V includes every lexeme appearing in the input synonymy dictionaries. The set of undirected edges E is composed of all edges 1581 Figure 2: Disambiguation of an ambiguous input graph using local clustering (WSI) to facilitate global clustering of words into synsets. (u, v) ∈V × V retrieved from one of the input synonymy dictionaries. We consider three edge weight representations: • ones that assigns every edge the constant weight of 1; • count that weights the edge (u, v) as the number of times the synonymy pair appeared in the input dictionaries; • sim that assigns every edge (u, v) a weight equal to the cosine similarity of skip-gram word vectors (Mikolov et al., 2013). As the graph G is likely to have polysemous words, the goal is to separate individual word senses using graph-based word sense induction. 3.3 Local Clustering: Word Sense Induction In order to facilitate global fuzzy clustering of the graph, we perform disambiguation of its ambiguous nodes as illustrated in Figure 2. 
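As a concrete illustration of the graph construction and the sim edge weighting described above, the following sketch builds a weighted synonymy graph with networkx. The pairs and vectors arguments are hypothetical placeholders for the parsed synonymy dictionaries and pre-trained skip-gram vectors, and the fallback to a weight of 1 for out-of-vocabulary words is our own simplifying assumption.

```python
import numpy as np
import networkx as nx

def cosine(a, b):
    # Cosine similarity between two dense word vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def build_synonymy_graph(pairs, vectors):
    """pairs:   iterable of (u, v) synonymy pairs from the input dictionaries.
    vectors: dict mapping a word to its embedding (numpy array)."""
    G = nx.Graph()
    for u, v in pairs:
        if u == v:
            continue
        if u in vectors and v in vectors:
            w = max(cosine(vectors[u], vectors[v]), 0.0)   # "sim" weighting
        else:
            w = 1.0                                        # fall back to "ones"
        # A "count"-style weighting could instead increment w for repeated pairs.
        G.add_edge(u, v, weight=w)
    return G
```

The next step splits the ambiguous nodes of this graph into sense-specific nodes.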
First, we use a graph-based word sense induction method that is similar to the curvature-based approach of Dorow and Widdows (2003). In particular, removal of the nodes participating in many triangles tends to separate the original graph into several connected components. Thus, given a word u, we extract a network of its nearest neighbors from the synonymy graph G. Then, we remove the original word u from this network and run a hard graph clustering algorithm that assigns one node to one and only one cluster. In our experiments, we test Chinese Whispers and Markov Clustering. The expected result is that each cluster represents a different sense of the word u, e.g.:

bank1 {streambank, riverbank, ...}
bank2 {bank company, ...}
bank3 {bank building, building, ...}
bank4 {coin bank, penny bank, ...}

We denote, e.g., bank1, bank2 and other items as word senses referred to as senses(bank). We denote as ctx(s) the cluster corresponding to the word sense s. Note that the context words have no sense labels. They are recovered by the disambiguation approach described next.

3.4 Disambiguation of Neighbors

Next, we disambiguate the neighbors of each induced sense. The previous step results in splitting word nodes into (one or more) sense nodes. However, the nearest neighbors of each sense node are still ambiguous, e.g., (bank3, building?). To recover the sense labels of the neighboring words, we employ the following sense disambiguation approach proposed by Faralli et al. (2016). For each word u in the context ctx(s) of the sense s, we find the sense $\hat{u}$ of that word which is most similar to the context. We use the cosine similarity measure between the context of the sense s and the context of each candidate sense u′ of the word u:

$$\hat{u} = \arg\max_{u' \in \mathrm{senses}(u)} \cos\big(\mathrm{ctx}(s), \mathrm{ctx}(u')\big).$$

A context ctx(·) is represented by a sparse vector in a vector space of all ambiguous words of all contexts. The result is a disambiguated context $\widehat{\mathrm{ctx}}(s)$ in a space of disambiguated words derived from its ambiguous version ctx(s):

$$\widehat{\mathrm{ctx}}(s) = \{\hat{u} : u \in \mathrm{ctx}(s)\}.$$

3.5 Global Clustering: Synset Induction

Finally, we construct the word sense graph G′ = (V′, E′) using the disambiguated senses instead of the original words and establishing the edges between these disambiguated senses:

$$V' = \bigcup_{u \in V} \mathrm{senses}(u), \qquad E' = \bigcup_{s \in V'} \{s\} \times \widehat{\mathrm{ctx}}(s).$$

Running a hard clustering algorithm on G′ produces the desired set of synsets as our final result. Figure 2 illustrates the process of disambiguation of an input ambiguous graph on the example of the word “bank”. As one may observe, disambiguation of the nearest neighbors is a necessity to be able to construct a global version of the sense-aware graph. Note that current approaches to WSI, e.g., (Véronis, 2004; Biemann, 2006; Hope and Keller, 2013), do not perform this step, but perform only local clustering of the graph since they do not aim at a global representation of synsets.

3.6 Local-Global Fuzzy Graph Clustering

While we use our approach to synset induction in this work, the core of our method is the “local-global” fuzzy graph clustering algorithm, which can be applied to arbitrary graphs (see Figure 1). This method, summarized in Algorithm 1, takes an undirected graph G = (V, E) as the input and outputs a set of fuzzy clusters of its nodes V. This is a meta-algorithm as it operates on top of two hard clustering algorithms denoted as Cluster_local and Cluster_global, such as CW or MCL.
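Before walking through the phases of Algorithm 1, the neighbor disambiguation used in its second phase (Section 3.4) can be sketched as follows, representing each context ctx(·) as a sparse {word: weight} dictionary. The function and argument names are illustrative assumptions, not part of the released implementation.

```python
from math import sqrt

def sparse_cosine(a, b):
    """Cosine similarity between two sparse vectors given as {word: weight} dicts."""
    if not a or not b:
        return 0.0
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = sqrt(sum(w * w for w in a.values())) * sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

def disambiguate_context(ctx, senses, sense_ctx):
    """Replace each ambiguous word in ctx with its most context-similar sense.

    ctx:       {word: weight} context of the sense being disambiguated.
    senses:    {word: [sense ids]} sense inventory from the local clustering step.
    sense_ctx: {sense id: {word: weight}} context of every candidate sense.
    """
    resolved = {}
    for u, weight in ctx.items():
        candidates = senses.get(u, [])
        if not candidates:
            resolved[u] = weight   # monosemous or unseen word: keep as-is
            continue
        best = max(candidates, key=lambda s: sparse_cosine(ctx, sense_ctx[s]))
        resolved[best] = weight
    return resolved
```

Running this for every induced sense yields the disambiguated contexts from which the sense graph G′ is built.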
At the first phase of the algorithm, for each node its senses are induced via ego network clustering (lines 1– 7). Next, the disambiguation of each ego network is performed (lines 8–15). Finally, the fuzzy clusters are obtained by applying the hard clustering algorithm to the disambiguated graph (line 16). As a post-processing step, the sense labels can be removed to make the cluster elements subsets of V . 4 Evaluation We conduct our experiments on resources from two different languages. We evaluate our approach on two datasets for English to demonstrate its performance on a resource-rich language. Additionally, we evaluate it on two Russian datasets since Russian is a good example of an under-resourced language with a clear need for synset induction. 4.1 Gold Standard Datasets For each language, we used two differently constructed lexical semantic resources listed in Table 1 to obtain gold standard synsets. English. We use WordNet8, a popular English lexical database constructed by expert lexicographers. WordNet contains general vocabulary and 8https://wordnet.princeton.edu Algorithm 1 WATSET fuzzy graph clustering Input: a set of nodes V and a set of edges E. Output: a set of fuzzy clusters of V . 1: for all u ∈V do 2: C ←Clusterlocal(Ego(u)) // C = {C1, ...} 3: for i ←1 . . . |C| do 4: ctx(ui) ←Ci 5: senses(u) ←senses(u) ∪{ui} 6: end for 7: end for 8: V ′ ←S u∈V senses(u) 9: for all s ∈V ′ do 10: for all u ∈ctx(s) do 11: ˆu ←arg max u′∈senses(u) cos(ctx(s), ctx(u′)) 12: end for 13: c ctx(s) ←{ˆu : u ∈ctx(s)} 14: end for 15: E′ ←S s∈V ′{s} × c ctx(s) 16: return Clusterglobal(V ′, E′) appears to be de facto gold standard in similar tasks (Hope and Keller, 2013). We used WordNet 3.1 to derive the synonymy pairs from synsets. Additionally, we use BabelNet9, a large-scale multilingual semantic network constructed automatically using WordNet, Wikipedia and other resources. We retrieved all the synonymy pairs from the BabelNet 3.7 synsets marked as English. Russian. As a lexical ontology for Russian, we use RuWordNet10 (Loukachevitch et al., 2016), containing both general vocabulary and domainspecific synsets related to sport, finance, economics, etc. Up to a half of the words in this resource are multi-word expressions (Kiselev et al., 2015), which is due to the coverage of domainspecific vocabulary. RuWordNet is a WordNetlike version of the RuThes thesaurus that is constructed in the traditional way, namely by a small group of expert lexicographers (Loukachevitch, 2011). In addition, we use Yet Another RussNet11 (YARN) by Braslavski et al. (2016) as another gold standard for Russian. The resource is constructed using crowdsourcing and mostly covers general vocabulary. Particularly, non-expert users are allowed to edit synsets in a collaborative way loosely supervised by a team of project curators. Due to the ongoing development of the re9http://www.babelnet.org 10http://ruwordnet.ru/en 11https://russianword.net/en 1583 source, we selected as the gold standard only those synsets that were edited at least eight times in order to filter out noisy incomplete synsets. Resource # words # synsets # synonyms WordNet En 148 730 117 659 152 254 BabelNet En 11 710 137 6 667 855 28 822 400 RuWordNet Ru 110 242 49 492 278 381 YARN Ru 9 141 2 210 48 291 Table 1: Statistics of the gold standard datasets. 
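Returning to Algorithm 1, a compact Python rendering of the local-global meta-algorithm might look as follows. It assumes networkx graphs and two pluggable hard clustering routines; for brevity, the neighbor disambiguation uses a set-overlap similarity instead of the cosine measure of Section 3.4, so this is a simplified sketch rather than the released implementation.

```python
import networkx as nx

def watset(G, cluster_local, cluster_global):
    """Local-global fuzzy clustering of an undirected graph G (Algorithm 1 sketch).

    cluster_local / cluster_global: callables mapping a networkx graph to a
    list of node sets (hard clusters), e.g. Chinese Whispers or MCL wrappers.
    """
    senses, sense_ctx = {}, {}
    # Phase 1: induce senses by clustering each ego network (the ego removed).
    for u in G.nodes():
        ego = G.subgraph(list(G.neighbors(u)))   # neighborhood without u itself
        for i, cluster in enumerate(cluster_local(ego)):
            sense = (u, i)
            senses.setdefault(u, []).append(sense)
            sense_ctx[sense] = set(cluster)
    # Phase 2: disambiguate every context word (set overlap as a stand-in
    # for the cosine similarity of Section 3.4).
    def best_sense(word, context):
        candidates = senses.get(word, [(word, 0)])   # pseudo-sense for isolated words
        return max(candidates, key=lambda s: len(sense_ctx.get(s, set()) & context))
    G2 = nx.Graph()
    for sense, ctx in sense_ctx.items():
        for word in ctx:
            G2.add_edge(sense, best_sense(word, ctx))
    # Phase 3: one global hard clustering of the disambiguated sense graph;
    # sense labels are stripped as a post-processing step.
    return [{w for (w, _) in cluster} for cluster in cluster_global(G2)]
```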
4.2 Evaluation Metrics To evaluate the quality of the induced synsets, we transformed them into binary synonymy relations and computed precision, recall, and F-score on the basis of the overlap of these binary relations with the binary relations from the gold standard datasets. Given a synset containing n words, we generate a set of n(n−1) 2 pairs of synonyms. The F-score calculated this way is known as Paired F-score (Manandhar et al., 2010; Hope and Keller, 2013). The advantage of this measure compared to other cluster evaluation measures, such as Fuzzy B-Cubed (Jurgens and Klapaftis, 2013), is its straightforward interpretability. 4.3 Word Embeddings English. We use the standard 300-dimensional word embeddings trained on the 100 billion tokens Google News corpus (Mikolov et al., 2013).12 Russian. We use the 500-dimensional word embeddings trained using the skip-gram model with negative sampling (Mikolov et al., 2013) using a context window size of 10 with the minimal word frequency of 5 on a 12.9 billion tokens corpus of books. These embeddings were shown to produce state-of-the-art results in the RUSSE shared task13 and are part of the Russian Distributional Thesaurus (RDT) (Panchenko et al., 2017b).14 4.4 Input Dictionary of Synonyms For each language, we constructed a synonymy graph using openly available language resources. The statistics of the graphs used as the input in the further experiments are shown in Table 2. 12https://code.google.com/p/word2vec 13http://www.dialog-21.ru/en/ evaluation/2015/semantic_similarity 14http://russe.nlpub.ru/downloads English. Synonyms were extracted from the English Wiktionary15, which is the largest Wiktionary at the present moment in terms of the lexical coverage, using the DKPro JWKTL tool by Zesch et al. (2008). English words have been extracted from the dump. Russian. Synonyms from three sources were combined to improve lexical coverage of the input dictionary and to enforce confidence in jointly observed synonyms: (1) synonyms listed in the Russian Wiktionary extracted using the Wikokit tool by Krizhanovsky and Smirnov (2013); (2) the dictionary of Abramov (1999); and (3) the Universal Dictionary of Concepts (Dikonov, 2013). While the two latter resources are specific to Russian, Wiktionary is available for most languages. Note that the same input synonymy dictionaries were used by authors of YARN to construct synsets using crowdsourcing. The results on the YARN dataset show how close an automatic synset induction method can approximate manually created synsets provided the same starting material.16 Language # words # synonyms English 243 840 212 163 Russian 83 092 211 986 Table 2: Statistics of the input datasets. 5 Results We compare WATSET with five state-of-the art graph clustering methods presented in Section 2: Chinese Whispers (CW), Markov Clustering (MCL), MaxMax, ECO clustering, and the clique percolation method (CPM). The first two algorithms perform hard clustering, while the last three are fuzzy clustering methods just like our method. While the hard clustering algorithms are able to discover clusters which correspond to synsets composed of unambigous words, they can produce wrong results in the presence of lexical ambiguity (one node belongs to several synsets). In our experiments, we rely on our own implementation of MaxMax and ECO as reference implementations are not available. For CW17, MCL18 15We used the Wiktionary dumps of February 1, 2017. 16We used the YARN dumps of February 7, 2017. 
17https://www.github.com/uhh-lt/ chinese-whispers 18http://java-ml.sourceforge.net 1584 CW MCL MaxMax ECO CPM Watset 0.0 0.1 0.2 0.3 WordNet (English) F−score CW MCL MaxMax ECO CPM Watset 0.00 0.05 0.10 0.15 0.20 RuWordNet (Russian) F−score CW MCL MaxMax ECO CPM Watset 0.0 0.1 0.2 0.3 BabelNet (English) F−score CW MCL MaxMax ECO CPM Watset 0.0 0.1 0.2 0.3 0.4 YARN (Russian) F−score Figure 3: Impact of the different graph weighting schemas on the performance of synset induction: ones, count, sim. Each bar corresponds to the top performance of a method in Tables 3 and 4. and CPM19, available implementations have been used. During the evaluation, we delete clusters equal or larger than the threshold of 150 words as they hardly can represent any meaningful synset. The notation WATSET[MCL, CWtop] means using MCL for local clustering and Chinese Whispers in the top mode for global clustering. 5.1 Impact of Graph Weighting Schema Figure 3 presents an overview of the evaluation results on both datasets. The first step, common for all of the tested synset induction methods, is graph construction. Thus, we started with an analysis of three ways to weight edges of the graph introduced in Section 3.2: binary scores (ones), frequencies (count), and semantic similarity scores (sim) based on word vector similarity. Results across various configurations and methods indicate that using the weights based on the similarity scores provided by word embeddings is the best strategy for all methods except MaxMax on the English datasets. However, its performance using the ones weighting does not exceed the other methods using the sim weighting. Therefore, we report all further results on the basis of the sim weights. The edge weighting scheme impacts Russian more for most algorithms. The CW algorithm however remains sensitive to the weighting also for the English dataset due to its randomized nature. 19https://networkx.github.io 5.2 Comparative Analysis Table 3 and 4 present evaluation results for both languages. For each method, we show the best configurations in terms of F-score. One may note that the granularity of the resulting synsets, especially for Russian, is very different, ranging from 4 000 synsets for the CPMk=3 method to 67 645 induced by the ECO method. Both tables report the number of words, synsets and synonyms after pruning huge clusters larger than 150 words. Without this pruning, the MaxMax and CPM methods tend to discover giant components obtaining almost zero precision as we generate all possible pairs of nodes in such clusters. The other methods did not show such behavior. WATSET robustly outperforms all other methods according to F-score on both English datasets (Table 3) and on the YARN dataset for Russian (Table 4). Also, it outperforms all other methods according to recall on both Russian datasets. The disambiguation of the input graph performed by the WATSET method splits nodes belonging to several local communities to several nodes, significantly facilitating the clustering task otherwise complicated by the presence of the hubs that wrongly link semantically unrelated nodes. Interestingly, in all the cases, the toughest competitor was a hard clustering algorithm—MCL (van Dongen, 2000). 
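As a side note on how the precision, recall and F-score values in Tables 3 and 4 are obtained, the Paired F-score of Section 4.2 reduces to expanding every synset into its unordered word pairs and comparing the two pair sets. The sketch below is a minimal illustration; the handling of singleton synsets and of vocabulary mismatch is deliberately simplified.

```python
from itertools import combinations

def synsets_to_pairs(synsets):
    """Expand each synset of n words into its n*(n-1)/2 unordered synonym pairs."""
    pairs = set()
    for synset in synsets:
        for u, v in combinations(sorted(set(synset)), 2):
            pairs.add((u, v))
    return pairs

def paired_f_score(predicted_synsets, gold_synsets):
    pred = synsets_to_pairs(predicted_synsets)
    gold = synsets_to_pairs(gold_synsets)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For instance, paired_f_score([{"gullet", "throat"}], [{"gullet", "throat", "food pipe"}]) yields a precision of 1.0, a recall of 1/3 and an F-score of 0.5.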
We observed that the “plain” MCL successfully groups monosemous words, but 1585 WordNet BabelNet Method # words # synsets # synonyms P R F1 P R F1 WATSET[MCL, MCL] 243 840 112 267 345 883 0.345 0.308 0.325 0.400 0.301 0.343 MCL 243 840 84 679 387 315 0.342 0.291 0.314 0.390 0.300 0.339 WATSET[MCL, CWlog] 243 840 105 631 431 085 0.314 0.325 0.319 0.359 0.312 0.334 CWtop 243 840 77 879 539 753 0.285 0.317 0.300 0.326 0.317 0.321 WATSET[CWlog, MCL] 243 840 164 689 227 906 0.394 0.280 0.327 0.439 0.245 0.314 WATSET[CWlog, CWlog] 243 840 164 667 228 523 0.392 0.280 0.327 0.439 0.245 0.314 CPMk=2 186 896 67 109 317 293 0.561 0.141 0.225 0.492 0.214 0.299 MaxMax 219 892 73 929 797 743 0.176 0.300 0.222 0.202 0.313 0.245 ECO 243 840 171 773 84 372 0.784 0.069 0.128 0.699 0.096 0.169 Table 3: Comparison of the synset induction methods on datasets for English. All methods rely on the similarity edge weighting (sim); best configurations of each method in terms of F-scores are shown for each dataset. Results are sorted by F-score on BabelNet, top three values of each metric are boldfaced. RuWordNet YARN Method # words # synsets # synonyms P R F1 P R F1 WATSET[CWnolog, MCL] 83 092 55 369 332 727 0.120 0.349 0.178 0.402 0.463 0.430 WATSET[MCL, MCL] 83 092 36 217 403 068 0.111 0.341 0.168 0.405 0.455 0.428 WATSET[CWtop, CWlog] 83 092 55 319 341 043 0.116 0.351 0.174 0.386 0.474 0.425 MCL 83 092 21 973 353 848 0.155 0.291 0.203 0.550 0.340 0.420 WATSET[MCL, CWtop] 83 092 34 702 473 135 0.097 0.361 0.153 0.351 0.496 0.411 CWnolog 83 092 19 124 672 076 0.087 0.342 0.139 0.364 0.451 0.403 MaxMax 83 092 27 011 461 748 0.176 0.261 0.210 0.582 0.195 0.292 CPMk=3 15 555 4 000 45 231 0.234 0.072 0.111 0.626 0.060 0.110 ECO 83 092 67 645 18 362 0.724 0.034 0.066 0.904 0.002 0.004 Table 4: Results on Russian sorted by F-score on YARN, top three values of each metric are boldfaced. isolates the neighborhood of polysemous words, which results in the recall drop in comparison to WATSET. CW operates faster due to a simplified update step. On the same graph, CW tends to produce larger clusters than MCL. This leads to a higher recall of “plain” CW as compared to the “plain” MCL, at the cost of lower precision. Using MCL instead of CW for sense induction in WATSET expectedly produces more finegrained senses. However, at the global clustering step, these senses erroneously tend to form coarsegrained synsets connecting unrelated senses of the ambiguous words. This explains the generally higher recall of WATSET[MCL, ·]. Despite the randomized nature of CW, variance across runs do not affect the overall ranking: The rank of different versions of CW (log, nolog, top) can change, while the rank of the best CW configuration compared to other methods remains the same. The MaxMax algorithm shows mixed results. On the one hand, it outputs large clusters uniting more than hundred nodes. This inevitably leads to a high recall, as it is clearly seen in the results for Russian because such synsets still pass under our cluster size threshold of 150 words. Its synsets on English datasets are even larger and get pruned, which results in low recall. On the other hand, smaller synsets having at most 10–15 words were identified correctly. MaxMax appears to be extremely sensible to edge weighting, which also complicates its practical use. The CPM algorithm showed unsatisfactory results, emitting giant components encompassing thousands of words. 
Such clusters were automatically pruned, but the remaining clusters are relatively correctly built synsets, which is confirmed by the high values of precision. When increasing the minimal number of elements in the clique k, recall improves, but at the cost of a dramatic precision drop. We suppose that the network structure assumptions exploited by CPM do not accurately model the structure of our synonymy graphs. Finally, the ECO method yielded the worst results because the most cluster candidates failed to pass through the constant threshold used for estimating whether a pair of words should be included in the same cluster. Most synsets produced by this method were trivial, i.e., containing only a single 1586 Resource P R F1 BabelNet on WordNet En 0.729 0.998 0.843 WordNet on BabelNet En 0.998 0.699 0.822 YARN on RuWordNet Ru 0.164 0.162 0.163 BabelNet on RuWordNet Ru 0.348 0.409 0.376 RuWordNet on YARN Ru 0.670 0.121 0.205 BabelNet on YARN Ru 0.515 0.109 0.180 Table 5: Performance of lexical resources crossevaluated against each other. word. The remaining synsets for both languages have at most three words that have been connected by a chance due to the edge noising procedure used in this method resulting in low recall. 6 Discussion On the absolute scores. The results obtained on all gold standards (Figure 3) show similar trends in terms of relative ranking of the methods. Yet absolute scores of YARN and RuWordNet are substantially different due to the inherent difference of these datasets. RuWordNet is more domainspecific in terms of vocabulary, so our input set of generic synonymy dictionaries has a limited coverage on this dataset. On the other hand, recall calculated on YARN is substantially higher as this resource was manually built on the basis of synonymy dictionaries used in our experiments. The reason for low absolute numbers in evaluations is due to an inherent vocabulary mismatch between the input dictionaries of synonyms and the gold datasets. To validate this hypothesis, we performed a cross-resource evaluation presented in Table 5. The low performance of the crossevaluation of the two resources supports the hypothesis: no single resource for Russian can obtain high recall scores on another one. Surprisingly, even BabelNet, which integrates most of available lexical resources, still does not reach a recall substantially larger than 0.5.20 Note that the results of this cross-dataset evaluation are not directly comparable to results in Table 4 since in our experiments we use much smaller input dictionaries than those used by BabelNet. On sparseness of the input dictionary. Table 6 presents some examples of the obtained synsets of various sizes for the top WATSET configuration on both languages. As one might observe, the qual20We used BabelNet 3.7 extracting all 3 497 327 synsets that were marked as Russian. ity of the results is highly plausible. However, one limitation of all approaches considered in this paper is the dependence on the completeness of the input dictionary of synonyms. In some parts of the input synonymy graph, important bridges between words can be missing, leading to smallerthan-desired synsets. A promising extension of the present methodology is using distributional models to enhance connectivity of the graph by cautiously adding extra relations. 
Size Synset 2 {decimal point, dot} 3 {gullet, throat, food pipe} 4 {microwave meal, ready meal, TV dinner, frozen dinner} 5 {objective case, accusative case, oblique case, object case, accusative} 6 {radio theater, dramatized audiobook, audio theater, radio play, radio drama, audio play} Table 6: Sample synsets induced by the WATSET[MCL, MCL] method for English. 7 Conclusion We presented a new robust approach to fuzzy graph clustering that relies on hard graph clustering. Using ego network clustering, the nodes belonging to several local communities are split into several nodes each belonging to one community. The transformed “disambiguated” graph is then clustered using an efficient hard graph clustering algorithm, obtaining a fuzzy clustering as the result. The disambiguated graph facilitates clustering as it contains fewer hubs connecting unrelated nodes from different communities. We apply this meta clustering algorithm to the task of synset induction on two languages, obtaining the best results on three datasets and competitive results on one dataset in terms of F-score as compared to five state-of-the-art graph clustering methods. Acknowledgments We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) foundation under the “JOIN-T” project, the DAAD, the RFBR under the project no. 16-37-00354 mol a, and the RFH under the project no. 16-04-12019. We also thank three anonymous reviewers for their helpful comments, Andrew Krizhanovsky for providing a parsed Wiktionary, Natalia Loukachevitch for the provided RuWordNet dataset, and Denis Shirgin who suggested the WATSET name. 1587 References Nikolay Abramov. 1999. The dictionary of Russian synonyms and semantically related expressions [Slovar’ russkikh sinonimov i skhodnykh po smyslu vyrazhenii]. Russian Dictionaries [Russkie slovari], Moscow, Russia, 7th edition. In Russian. Marco Baroni and Alessandro Lenci. 2010. Distributional Memory: A General Framework for Corpus-based Semantics. Computational Linguistics 36(4):673–721. https://doi.org/10.1162/coli a 00016. Chris Biemann. 2006. Chinese Whispers: An Efficient Graph Clustering Algorithm and Its Application to Natural Language Processing Problems. In Proceedings of the First Workshop on Graph Based Methods for Natural Language Processing. Association for Computational Linguistics, New York City, NY, USA, TextGraphs-1, pages 73–80. http://dl.acm.org/citation.cfm?id=1654774. Chris Biemann. 2012. Structure Discovery in Natural Language. Theory and Applications of Natural Language Processing. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-25923-4. Chris Biemann and Martin Riedl. 2013. Text: now in 2D! A framework for lexical expansion with contextual similarity. Journal of Language Modelling 1(1):55–95. https://doi.org/10.15398/jlm.v1i1.60. Immanuel M. Bomze, Marco Budinich, Panos M. Pardalos, and Marcello Pelillo. 1999. The maximum clique problem. In Handbook of Combinatorial Optimization, Springer US, pages 1–74. https://doi.org/10.1007/978-1-4757-3023-4 1. Pavel Braslavski, Dmitry Ustalov, Mukhin Mukhin, and Yuri Kiselev. 2016. YARN: Spinning-inProgress. In Proceedings of the 8th Global WordNet Conference. Global WordNet Association, Bucharest, Romania, GWC 2016, pages 58–65. http://gwc2016.racai.ro/procedings.pdf. Antonio Di Marco and Roberto Navigli. 2012. Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction. Computational Linguistics 39(3):709–754. https://doi.org/10.1162/COLI a 00148. Vyachelav G. Dikonov. 2013. 
Development of lexical basis for the Universal Dictionary of UNL Concepts. In Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference “Dialogue”. RGGU, Moscow, volume 12 (19), pages 212–221. http://www.dialog21.ru/media/1238/dikonovv.pdf. Beate Dorow and Dominic Widdows. 2003. Discovering Corpus-Specific Word Senses. In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics Volume 2. Association for Computational Linguistics, Budapest, Hungary, EACL ’03, pages 79–82. https://doi.org/10.3115/1067737.1067753. Martin Everett and Stephen P. Borgatti. 2005. Ego network betweenness. Social Networks 27(1):31–38. https://doi.org/10.1016/j.socnet.2004.11.007. Stefano Faralli, Alexander Panchenko, Chris Biemann, and Simone P. Ponzetto. 2016. Linked Disambiguated Distributional Semantic Networks. In The Semantic Web – ISWC 2016: 15th International Semantic Web Conference, Kobe, Japan, October 17–21, 2016, Proceedings, Part II. Springer International Publishing, Cham, pages 56–64. https://doi.org/10.1007/978-3-319-46547-0 7. David Gfeller, Jean-C´edric Chappelier, and Paulo De Los Rios. 2005. Synonym Dictionary Improvement through Markov Clustering and Clustering Stability. In Proceedings of the International Symposium on Applied Stochastic Models and Data Analysis. pages 106–113. https://conferences.telecombretagne.eu/asmda2005/IMG/pdf/proceedings/ 106.pdf. Hugo Gonc¸alo Oliveira and Paolo Gomes. 2014. ECO and Onto.PT: a flexible approach for creating a Portuguese wordnet automatically. Language Resources and Evaluation 48(2):373–393. https://doi.org/10.1007/s10579-013-9249-9. Zhiguo Gong, Chan Wa Cheang, and U. Leong Hou. 2005. Web Query Expansion by WordNet. In Proceedings of the 16th International Conference on Database and Expert Systems Applications - DEXA ’05, Springer Berlin Heidelberg, Copenhagen, Denmark, pages 166–175. https://doi.org/10.1007/11546924 17. Iryna Gurevych, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M. Meyer, and Christian Wirth. 2012. UBY – A Large-Scale Unified Lexical-Semantic Resource Based on LMF. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Avignon, France, EACL ’12, pages 580–590. http://www.aclweb.org/anthology/E12-1059. Iryna Gurevych, Judith Eckle-Kohler, and Michael Matuschek. 2016. Linked Lexical Knowledge Bases: Foundations and Applications. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers. Kris Heylen, Yves Peirsman, Dirk Geeraerts, and Dirk Speelman. 2008. Modelling Word Similarity: an Evaluation of Automatic Synonymy Extraction Algorithms. In Proceedings of the Sixth International Conference on Language Resources and Evaluation. European Language Resources Association, Marrakech, Morocco, LREC 2008, pages 3243–3249. http://www.lrecconf.org/proceedings/lrec2008/pdf/818 paper.pdf. David Hope and Bill Keller. 2013. MaxMax: A GraphBased Soft Clustering Algorithm Applied to Word Sense Induction. In Computational Linguistics 1588 and Intelligent Text Processing: 14th International Conference, CICLing 2013, Samos, Greece, March 24-30, 2013, Proceedings, Part I, Springer Berlin Heidelberg, Berlin, Heidelberg, pages 368–381. https://doi.org/10.1007/978-3-642-37247-6 30. David Jurgens and Ioannis Klapaftis. 2013. SemEval2013 Task 13: Word Sense Induction for Graded and Non-Graded Senses. 
In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013). Association for Computational Linguistics, Atlanta, GA, USA, pages 290–299. http://www.aclweb.org/anthology/S13-2049. Yuri Kiselev, Sergey V. Porshnev, and Mikhail Mukhin. 2015. Current Status of Russian Electronic Thesauri: Quality, Completeness and Availability [Sovremennoe sostoyanie elektronnykh tezaurusov russkogo yazyka: kachestvo, polnota i dostupnost’]. Programmnaya Ingeneria 6:34–40. In Russian. http://novtex.ru/prin/full/06 2015.pdf. Andrew A. Krizhanovsky and Alexander V. Smirnov. 2013. An approach to automated construction of a general-purpose lexical ontology based on Wiktionary. Journal of Computer and Systems Sciences International 52(2):215–225. https://doi.org/10.1134/S1064230713020068. Cody Kwok, Oren Etzioni, and Daniel S. Weld. 2001. Scaling Question Answering to the Web. ACM Transactions on Information Systems 19(3):242– 262. https://doi.org/10.1145/502115.502117. Dekang Lin. 1998. An Information-Theoretic Definition of Similarity. In Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann Publishers Inc., Madison, WI, USA, ICML ’98, pages 296–304. http://citeseerx.ist.psu.edu/viewdoc/ download?doi=10.1.1.55.1832&rep=rep1&type=pdf. Natalia Loukachevitch. 2011. Thesauri in information retrieval tasks [Tezaurusy v zadachakh informatsionnogo poiska]. Moscow University Press [Izd-vo MGU], Moscow, Russia. In Russian. Natalia V. Loukachevitch, German Lashevich, Anastasia A. Gerasimova, Vladimir V. Ivanov, and Boris V. Dobrov. 2016. Creating Russian WordNet by Conversion. In Computational Linguistics and Intellectual Technologies: papers from the Annual conference “Dialogue”. RSUH, Moscow, Russia, pages 405–415. http://www.dialog21.ru/media/3409/loukachevitchnvetal.pdf. Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. SemEval-2010 Task 14: Word Sense Induction & Disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, Uppsala, Sweden, pages 63–68. http://www.aclweb.org/anthology/S10-1011. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26, Curran Associates, Inc., Harrahs and Harveys, NV, USA, pages 3111–3119. https://papers.nips.cc/paper/5021distributed-representations-of-words-and-phrasesand-their-compositionality.pdf. George A. Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM 38(11):39–41. https://doi.org/10.1145/219717.219748. Roberto Navigli and Simone P. Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence 193:217–250. https://doi.org/10.1016/j.artint.2012.07.001. Gergely Palla, Imre Derenyi, Illes Farkas, and Tamas Vicsek. 2005. Uncovering the overlapping community structure of complex networks in nature and society. Nature 435:814–818. https://doi.org/10.1038/nature03607. Alexander Panchenko. 2011. Comparison of the Baseline Knowledge-, Corpus-, and Web-based Similarity Measures for Semantic Relations Extraction. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics. 
Association for Computational Linguistics, Edinburgh, UK, pages 11–21. http://www.aclweb.org/anthology/W11-2502. Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone P. Ponzetto, and Chris Biemann. 2017a. Unsupervised Does Not Mean Uninterpretable: The Case for Word Sense Induction and Disambiguation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 86–98. http://www.aclweb.org/anthology/E17-1009. Alexander Panchenko, Dmitry Ustalov, Nikolay Arefyev, Denis Paperno, Natalia Konstantinova, Natalia Loukachevitch, and Chris Biemann. 2017b. Human and Machine Judgements for Russian Semantic Relatedness. In Analysis of Images, Social Networks and Texts: 5th International Conference, AIST 2016, Yekaterinburg, Russia, April 7-9, 2016, Revised Selected Papers. Springer International Publishing, Yekaterinburg, Russia, pages 221–235. https://doi.org/10.1007/978-3-319-52920-2 21. Patrick Pantel and Dekang Lin. 2002. Discovering Word Senses from Text. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, Edmonton, Alberta, Canada, KDD ’02, pages 613–619. https://doi.org/10.1145/775047.775138. 1589 Maria Pelevina, Nikolay Arefiev, Chris Biemann, and Alexander Panchenko. 2016. Making Sense of Word Embeddings. In Proceedings of the 1st Workshop on Representation Learning for NLP. Association for Computational Linguistics, Berlin, Germany, pages 174–183. http://anthology.aclweb.org/W16-1620. Stijn van Dongen. 2000. Graph Clustering by Flow Simulation. Ph.D. thesis, University of Utrecht. Jean V´eronis. 2004. HyperLex: lexical cartography for information retrieval. Computer Speech & Language 18(3):223–252. https://doi.org/10.1016/j.csl.2004.05.002. Torsten Zesch, Christof M¨uller, and Iryna Gurevych. 2008. Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary. In Proceedings of the 6th International Conference on Language Resources and Evaluation. European Language Resources Association, Marrakech, Morocco, pages 1646–1652. http://www.lrecconf.org/proceedings/lrec2008/pdf/420 paper.pdf. Guangyou Zhou, Yang Liu, Fang Liu, Daojian Zeng, and Jun Zhao. 2013. Improving Question Retrieval in Community Question Answering Using World Knowledge. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence. AAAI Press, Beijing, China, IJCAI ’13, pages 2239–2245. https://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/ paper/download/6581/7029. 1590
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1591–1600 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1146 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1591–1600 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1146 Neural Modeling of Multi-Predicate Interactions for Japanese Predicate Argument Structure Analysis Hiroki Ouchi1,2 Hiroyuki Shindo1,2 Yuji Matsumoto1,2 1 Nara Institute of Science and Technology 2 RIKEN Center for Advanced Intelligence Project (AIP) { ouchi.hiroki.nt6, shindo, matsu }@is.naist.jp Abstract The performance of Japanese predicate argument structure (PAS) analysis has improved in recent years thanks to the joint modeling of interactions between multiple predicates. However, this approach relies heavily on syntactic information predicted by parsers, and suffers from error propagation. To remedy this problem, we introduce a model that uses grid-type recurrent neural networks. The proposed model automatically induces features sensitive to multi-predicate interactions from the word sequence information of a sentence. Experiments on the NAIST Text Corpus demonstrate that without syntactic information, our model outperforms previous syntax-dependent models. 1 Introduction Predicate argument structure (PAS) analysis is a basic semantic analysis task, in which systems are required to identify the semantic units of a sentence, such as who did what to whom. In prodrop languages such as Japanese, Chinese and Italian, arguments are often omitted in text, and such argument omission is regarded as one of the most problematic issues facing PAS analysis (Iida and Poesio, 2011; Sasano and Kurohashi, 2011; Hangyo et al., 2013). In response to the argument omission problem, in Japanese PAS analysis, a joint model of the interactions between multiple predicates has been gaining popularity and achieved the state-ofthe-art results (Ouchi et al., 2015; Shibata et al., 2016). This approach is based on the linguistic intuition that the predicates in a sentence are semantically related to each other, and capturing this relation can be useful for PAS analysis. In the examFigure 1: Example of Japanese PAS. The upper edges denote dependency relations, and the lower edges denote case arguments. “NOM” and “ACC” denote the nominative and accusative arguments, respectively. “ϕi” is a zero pronoun, referring to the antecedent “男i (mani)”. ple sentence in Figure 1, the word “男i (mani)” is the accusative argument of the predicate “逮捕し た(arrested)” and is shared by the other predicate “逃走した(escaped)” as its nominative argument. Considering the semantic relation between “逮捕 した(arrested)” and “逃走した(escaped)”, we intuitively know that the person arrested by someone is likely to be the escaper. That is, information about one predicate-argument relation could help to identify another predicate-argument relation. However, to model such multi-predicate interactions, the joint approach in the previous studies relies heavily on syntactic information, such as part-of-speech (POS) tags and dependency relations predicted by POS taggers and syntactic parsers. Consequently, it suffers from error propagation caused by pipeline processing. 
To remedy this problem, we propose a neural model which automatically induces features sensitive to multi-predicate interactions exclusively from the word sequence information of a sentence. The proposed model takes as input all predicates and their argument candidates in a sentence at a time, and captures the interactions using gridtype recurrent neural networks (Grid-RNN) without syntactic information. 1591 Figure 2: Overview of neural models: (i) single-sequence and (ii) multi-sequence models. In this paper, we first introduce a basic model that uses RNNs. This model independently estimates the arguments of each predicate without considering multi-predicate interactions (Sec. 3). Then, extending this model, we propose a neural model that uses Grid-RNNs (Sec. 4). Performing experiments on the NAIST Text Corpus (Iida et al., 2007), we demonstrate that even without syntactic information, our neural models outperform previous syntax-dependent models (Imamura et al., 2009; Ouchi et al., 2015). In particular, the neural model using Grid-RNNs achieved the best result. This suggests that the proposed grid-type neural architecture effectively captures multi-predicate interactions and contributes to performance improvements. 1 2 Japanese Predicate Argument Structure Analysis 2.1 Task Description In Japanese PAS analysis, arguments are identified that each fulfills one of the three major case roles, nominative (NOM), accusative (ACC) and dative (DAT) cases, for each predicate. Arguments can be divided into the following three categories according to the positions relative to their predicates (Hayashibe et al., 2011; Ouchi et al., 2015): Dep: Arguments that have direct syntactic dependency on the predicate. Zero: Arguments referred to by zero pronouns within the same sentence that have no direct syntactic dependency on the predicate. Inter-Zero: Arguments referred to by zero pronouns outside of the same sentence. 1Our source code is publicly available at https://github.com/hiroki13/neural-pasa-system For example, in Figure 1, the nominative argument “警察(police)” for the predicate “逮捕した(arrested)” is regarded as a Dep argument, because the argument has a direct syntactic dependency on the predicate. By contrast, the nominative argument “男i (mani)” for the predicate “逃走し た(escaped)” is regarded as a Zero argument, because the argument has no direct syntactic dependency on the predicate. In this paper, we focus on the analysis for these intra-sentential arguments, i.e., Dep and Zero. In order to identify inter-sentential arguments (Inter-Zero), a much broader space must be searched (e.g., the whole document), resulting in a much more complicated analysis than intrasentential arguments.2 Owing to this complication, Ouchi et al. (2015) and Shibata et al. (2016) focused exclusively on intra-sentential argument analysis. Following this trend, we also restrict our focus to intra-sentential argument analysis. 2.2 Challenging Problem Arguments are often omitted in Japanese sentences. In Figure 1, ϕi represents the omitted argument, called the zero pronoun. This zero pronoun ϕi refers to “男i (mani)”. In Japanese PAS analysis, when an argument of the target predicate is omitted, we have to identify the antecedent of the omitted argument (i.e., the Zero argument). The analysis of such Zero arguments is much more difficult than that for Dep arguments, owing to the lack of direct syntactic dependencies. For Dep arguments, the syntactic dependency between an argument and its predicate is a strong clue. 
In the sentence in Figure 1, for the predi2The F-measure remains 10-20% (Taira et al., 2008; Imamura et al., 2009; Sasano and Kurohashi, 2011). 1592 Figure 3: Overall architecture of the singlesequence model. This model consists of three components: (i) Input Layer, (ii) RNN Layer and (iii) Output Layer. cate “逮捕した(arrested)”, the nominative argument is “警察(police)”. This argument is easily identified by relying on the syntactic dependency. By contrast, because the nominative argument “男 i (mani)” has no syntactic dependency on its predicate “逃走した(escaped)”, we must rely on other information to identify the zero argument. As a solution to this problem, we exploit two kinds of information: (i) the context of the entire sentence, and (ii) multi-predicate interactions. For the former, we introduce single-sequence model that induces context-sensitive representations from a sequence of argument candidates of a predicate. For the latter, we introduce multisequence model that induces predicate-sensitive representations from multiple sequences of argument candidates of all predicates in a sentence (shown in Figure 2). 3 Single-Sequence Model The single-sequence model exploits stacked bidirectional RNNs (Bi-RNN) (Schuster and Paliwal, 1997; Graves et al., 2005, 2013; Zhou and Xu, 2015). Figure 3 shows the overall architecture, which consists of the following three components: Input Layer: Map each word to a feature vector representation. RNN Layer: Produce high-level feature vectors using Bi-RNNs. Output Layer: Compute the probability of each case label for each word using the softmax function. Figure 4: Example of feature extraction. The underlined word is the target predicate. From the sentence “彼女はパンを食べた。(She ate bread.)”, three types of features are extracted for the target predicate “食べた(ate)”. Figure 5: Example of the process of creating a feature vector. The extracted features are mapped to each vector, and all the vectors are concatenated into one feature vector. In the following subsections, we describe each of these three components in detail. 3.1 Input Layer Given an input sentence w1:T = (w1, · · · , wT) and a predicate p, each word wt is mapped to a feature representation xt, which is the concatenation (⊕) of three types of vectors: xt = xarg t ⊕xpred t ⊕xmark t (1) where each vector is based on the following atomic features inspired by Zhou and Xu (2015): ARG: Word index of each word. PRED: Word index of the target predicate and the words around the predicate. MARK: Binary index that represents whether or not the word is the predicate. 1593 Figure 4 presents an example of the atomic features. For the ARG feature, we extract a word index xword ∈V for each word. Similarly, for the PRED feature, we extract each word index xword for the C words taking the target predicate at the center, where C denotes the window size. The MARK feature xmark ∈{0, 1} is a binary value that represents whether or not the word is the predicate. Then, using feature indices, we extract feature vector representations from each embedding matrix. Figure 5 shows the process of creating the feature vector x1 for the word w1 “彼女(she)”. We set two embedding matrices: (i) a word embedding matrix Eword ∈Rdword×|V|, and (ii) a mark embedding matrix Emark ∈Rdmark×2. From each embedding matrix, we extract the corresponding column vectors and concatenate them as a feature vector xt based on Eq. 1. 
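To make the ARG, PRED and MARK features concrete, the sketch below builds the three feature indices for every word of a sentence. The padding index 0 and the default window size are our own assumptions and do not come from the corpus preprocessing of the paper.

```python
def extract_features(word_ids, prd_index, window=5):
    """Build ARG / PRED / MARK feature indices for every word in a sentence.

    word_ids:  list of word indices for the sentence (already mapped to a vocabulary).
    prd_index: position of the target predicate in the sentence.
    window:    size C of the window centred on the predicate (an assumption here).
    """
    T = len(word_ids)
    half = window // 2
    # PRED: word indices of the predicate and its surrounding words (0 = padding).
    pred_window = [word_ids[i] if 0 <= i < T else 0
                   for i in range(prd_index - half, prd_index + half + 1)]
    features = []
    for t, w in enumerate(word_ids):
        features.append({
            "arg": w,                            # ARG: index of the word itself
            "pred": pred_window,                 # PRED: shared window around the predicate
            "mark": 1 if t == prd_index else 0,  # MARK: is this word the predicate?
        })
    return features
```

Each index is then looked up in the corresponding embedding matrix and the resulting vectors are concatenated, as described next.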
Each feature vector xt is multiplied with a parameter matrix Wx: h(0) t = Wx xt (2) The vector h(0) t is given to the first RNN layer as input. 3.2 RNN Layer In the RNN layers, feature vectors are updated recurrently using Bi-RNNs. Bi-RNNs process an input sequence in a left-to-right manner for oddnumbered layers and in a right-to-left manner for even-numbered layers. By stacking these layers, we can construct the deeper network structures. Stacked Bi-RNNs consist of L layers, and the hidden state in the layer ℓ∈(1, · · · , L) is calculated as follows: h(ℓ) t = { g(ℓ)(h(ℓ−1) t , h(ℓ) t−1) (ℓ= odd) g(ℓ)(h(ℓ−1) t , h(ℓ) t+1) (ℓ= even) (3) Both of the odd- and even-numbered layers receive h(ℓ−1) t , the t-th hidden state of the ℓ−1 layer, as the first input of the function g(ℓ), which is an arbitrary function 3. For the second input of g(ℓ), odd-numbered layers receive h(ℓ) t−1, whereas evennumbered layers receive h(ℓ) t+1. By calculating the hidden states until the L-th layer, we obtain a hidden state sequence h(L) 1:T = (h(L) 1 , · · · , h(L) T ). Using each vector h(L) t , we calculate the probability of case labels for each word in the output layer. 3In this work, we used the Gated Recurrent Unit (GRU) (Cho et al., 2014) as the function g(ℓ). 3.3 Output Layer For the output layer, multi-class classification is performed using the softmax function: yt = softmax(Wy h(L) t ) where h(L) t denotes a vector representation propagated from the last RNN layer (Fig 3). Each element of yt is a probability value corresponding to each label. The label with the maximum probability among them is output as a result. In this work, we set five labels: NOM, ACC, DAT, PRED, null. PRED is the label for the predicate, and null denotes a word that does not fulfill any case role. 4 Multi-Sequence Model Whereas the single-sequence model assumes independence between predicates, the multi-sequence model assumes multi-predicate interactions. To capture such interactions between all predicates in a sentence, we extend the singlesequence model to the multi-sequence model using Grid-RNNs (Graves and Schmidhuber, 2009; Kalchbrenner et al., 2016). Figure 6 presents the overall architecture for the multi-sequence model, which consists of three components: Input Layer: Map words to M sequences of feature vectors for M predicates. Grid Layer: Update the hidden states over different sequences using Grid-RNNs. Output Layer: Compute the probability of each case label for each word using the softmax function. In the following subsections, we describe these three components in detail. 4.1 Input Layer The multi-sequence model takes as input a sentence w1:T = (w1, · · · , wT) and all predicates {pm}M 1 in the sentence. For each predicate pm, the input layer creates a sequence of feature vectors Xm = (xm,1, · · · , xm,T) by mapping each input word wt to a feature vector xm,t based on Eq 1. That is, for M predicates, M sequences of feature vectors {Xm}M 1 are created. Then, using Eq. 2, each feature vector xm,t is mapped to h(0) m,t, and a feature sequence is created for a predicate pm, i.e., H(0) m = (h(0) m,1, · · · , h(0) m,T). Consequently, for M predicates, we obtain M feature sequences {H(0) m }M 1 . 1594 Figure 6: Overall architecture of the multi-sequence model: an example of three sequences. 4.2 Grid Layer Inter-Sequence Connections For the grid layers, we use Grid-RNNs to propagate the feature information over the different sequences (inter-sequence connections). 
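Before moving to the grid layers, the stacked bi-directional RNN of Section 3.2 (Eq. 3) can be sketched in plain NumPy, with odd layers running left-to-right, even layers right-to-left, and a GRU as the recurrence function g. Parameter shapes and initialization are omitted, so this illustrates the control flow rather than the trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(params, x, h):
    """One GRU update; params holds the six weight matrices of a layer."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)
    r = sigmoid(Wr @ x + Ur @ h)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1.0 - z) * h + z * h_tilde

def stacked_alternating_rnn(layers, inputs):
    """Stacked RNN whose odd layers run left-to-right and even layers right-to-left.

    layers: list of per-layer GRU parameter tuples (layer 1 is index 0).
    inputs: list of input vectors h_t^(0), one per token.
    """
    hidden = list(inputs)
    for ell, params in enumerate(layers, start=1):
        d = params[1].shape[0]      # hidden size, taken from Uz
        order = range(len(hidden)) if ell % 2 == 1 else reversed(range(len(hidden)))
        h_prev = np.zeros(d)
        new_hidden = [None] * len(hidden)
        for t in order:
            h_prev = gru_step(params, hidden[t], h_prev)
            new_hidden[t] = h_prev
        hidden = new_hidden
    return hidden                    # h_1^(L), ..., h_T^(L)
```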
The figure on the right in Figure 6 shows the first grid layer. The hidden state is recurrently calculated from the upper-left (m = 1, t = 1) to the lowerright (m = M, t = T). Formally, in the ℓ-th layer, the hidden state h(ℓ) m,t is calculated as follows: h(ℓ) m,t= { g(ℓ)(h(ℓ−1) m,t ⊕h(ℓ) m−1,t, h(ℓ) m,t−1) (ℓ= odd) g(ℓ)(h(ℓ−1) m,t ⊕h(ℓ) m+1,t, h(ℓ) m,t+1) (ℓ= even) This equation is similar to Eq. 3. The main difference is that the hidden state of a neighboring sequence, h(ℓ) m−1,t (or h(ℓ) m+1,t), is concatenated (⊕) with the hidden state of the previous (ℓ−1) layer, h(ℓ−1) m,t , and is taken as input of the function g(ℓ). In the figure on the right in Figure 6, the blue curved lines represent the inter-sequence connections. Taking as input the hidden states of neighboring sequences, the network propagates feature information over multiple sequences (i.e., predicates). By calculating the hidden states until the L-th layer, we obtain M sequences of the hidden states, i.e., {H(L) m }M 1 , in which H(L) m = (h(L) m,1, · · · , h(L) m,T). Residual Connections As more layers are stacked, it becomes more difficult to learn the model parameters, owing to various challenges such as the vanishing gradient problem (Pascanu et al., 2013). In this work, we integrate residual connections (He et al., 2015; Wu et al., 2016) with our networks to form connections between layers. Specifically, the input vector h(ℓ−1) m,t of the ℓ-th layer is added to the output vector h(ℓ) m,t. Residual connections can also be applied to the single-sequence model. Thus, we can perform experiments on both models with/without residual connections. 4.3 Output Layer As with the single-sequence model, we use the softmax function to calculate the probability of the case labels of each word wt for each predicate pm: ym,t = softmax(Wy h(L) m,t) where h(L) m,t is a hidden state vector calculated in the last grid layer. 5 Related Work 5.1 Japanese PAS Analysis Approaches Existing approaches to Japanese PAS analysis are divided into two categories: (i) the pointwise approach and (ii) the joint approach. The pointwise approach involves estimating the score of each argument candidate for one predicate, and then selecting the argument candidate with the maximum score as an argument (Taira et al., 2008; Imamura et al., 2009; Hayashibe et al., 2011; Iida et al., 2016). The joint approach involves scoring all the predicateargument combinations in one sentence, and then selecting the combination with the highest score (Yoshikawa et al., 2011; Sasano and Kurohashi, 1595 2011; Ouchi et al., 2015; Shibata et al., 2016). Compared with the pointwise approach, the joint approach achieves better results. 5.2 Multi-Predicate Interactions Ouchi et al. (2015) reported that it is beneficial to Japanese PAS analysis to capture the interactions between all predicates in a sentence. This is based on the linguistic intuition that the predicates in a sentence are semantically related to each other, and that the information regarding this semantic relation can be useful for PAS analysis. Similarly, in semantic role labeling (SRL), Yang and Zong (2014) reported that their reranking model, which captures the multi-predicate interactions, is effective for the English constituentbased SRL task (Carreras and M`arquez, 2005). Taking this a step further, we propose a neural architecture that effectively models the multipredicate interactions. 
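To make the grid-layer update of Section 4.2 concrete, the following PyTorch sketch computes one (odd-numbered) grid layer over M predicate sequences, concatenating the neighbouring sequence's hidden state with the previous layer's state, and then adds a residual connection. The GRU cell choice, toy sizes, and zero vectors at the sequence borders are assumptions made for illustration.

```python
import torch
import torch.nn as nn

M, T, d = 3, 7, 32                   # predicates x words x hidden size (toy sizes)
cell = nn.GRUCell(2 * d, d)          # input: h^(l-1)_{m,t} (+) h^(l)_{m-1,t}

H_prev = torch.randn(M, T, d)        # output of the previous layer, h^(l-1)
rows = []
for m in range(M):                   # from the upper-left (m=1, t=1) ...
    state = torch.zeros(1, d)
    outs = []
    for t in range(T):               # ... to the lower-right (m=M, t=T)
        # inter-sequence connection: hidden state of the neighbouring sequence m-1
        neighbor = rows[m - 1][t] if m > 0 else torch.zeros(d)
        x = torch.cat([H_prev[m, t], neighbor]).unsqueeze(0)
        state = cell(x, state)
        outs.append(state.squeeze(0))
    rows.append(torch.stack(outs))
H = torch.stack(rows) + H_prev       # residual connection between layers
print(H.shape)                       # torch.Size([3, 7, 32])
```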
5.3 Neural Approaches Japanese PAS In recent years, several attempts have been made to apply neural networks to Japanese PAS analysis (Shibata et al., 2016; Iida et al., 2016)4. In Shibata et al. (2016), a feed-forward neural network is used for the score calculation part of the joint model proposed by Ouchi et al. (2015). In Iida et al. (2016), multi-column convolutional neural networks are used for the zero anaphora resolution task. Both models exploit syntactic and selectional preference information as the atomic features of neural networks. Overall, the use of neural networks has resulted in advantageous performance levels, mitigating the cost of manually designing combination features. In this work, we demonstrate that even without such syntactic information, our neural models can realize comparable performance exclusively using the word sequence information of a sentence. English SRL Some neural models have achieved high performance without syntactic information in English SRL. Collobert et al. (2011) and Zhou and Xu (2015) worked on the English constituent-based 4These previous studies used unpublished datasets and evaluated the performance with different experimental settings. Consequently, we cannot compare their models with ours. SRL task (Carreras and M`arquez, 2005) using neural networks. In Collobert et al. (2011), their model exploited a convolutional neural network and achieved a 74.15% F-measure without syntactic information. In Zhou and Xu (2015), their model exploited bidirectional RNNs with linear-chain conditional random fields (CRFs) and achieved the state-of-the-art result, an 81.07% Fmeasure. Our models should be regarded as an extension of their model. The main differences between Zhou and Xu (2015) and our work are: (i) constituent-based vs dependency-based argument identification and (ii) the multi-predicate consideration. For the constituent-based SRL, Zhou and Xu (2015) used CRFs to capture the IOB label dependencies, because systems are required to identify the spans of arguments for each predicate. By contrast, for Japanese dependency-based PAS analysis, we replaced the CRFs with the softmax function, because in Japanese, arguments are rarely adjacent to each other.5 Furthermore, whereas the model described in Zhou and Xu (2015) predicts arguments for each predicate independently, our multisequence model jointly predicts arguments for all predicates in a sentence concurrently by considering the multi-predicate interactions. 6 Experiments 6.1 Experimental Settings Dataset We used the NAIST Text Corpus 1.5, which consists of 40,000 sentences from Japanese newspapers (Iida et al., 2007). For the experiments, we adopted standard data splits (Taira et al., 2008; Imamura et al., 2009; Ouchi et al., 2015): Train: Articles: Jan 1-11, Editorials: Jan-Aug Dev: Articles: Jan 12-13, Editorials: Sept Test: Articles: Jan 14-17, Editorials: Oct-Dec We used the word boundaries annotated in the NAIST Text Corpus and the target predicates that have at least one argument in the same sentence. We did not use any external resources. Learning We trained the model parameters by minimizing 5In our preliminary experiment, we could not confirm the performance improvement by CRFs. 1596 the cross-entropy loss function: L(θ) = − ∑ n ∑ t log P(yt|xt) + λ 2 ||θ||2 (4) where θ is a set of model parameters, and the hyper-parameter λ is the coefficient governing the L2 weight decay. 
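The objective in Eq. 4 can be sketched in a few lines of PyTorch: summed token-level cross-entropy plus an L2 penalty. A stand-in linear output layer over the five labels (NOM, ACC, DAT, PRED, null) replaces the full network, and only its weights stand in for θ here; the toy hidden states and gold labels are purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d, num_labels, lam = 32, 5, 0.0005           # labels: NOM, ACC, DAT, PRED, null
W_y = nn.Parameter(0.1 * torch.randn(num_labels, d))   # stand-in for theta

def loss(h_L, gold):                         # h_L: (T, d) from the last RNN layer, gold: (T,)
    logits = h_L @ W_y.t()                   # output-layer scores before the softmax
    nll = F.cross_entropy(logits, gold, reduction="sum")   # -sum_t log P(y_t | x_t)
    return nll + (lam / 2) * (W_y ** 2).sum()              # + (lambda/2) ||theta||^2

h_L = torch.randn(7, d)
gold = torch.tensor([4, 0, 4, 4, 1, 4, 3])   # null NOM null null ACC null PRED
L = loss(h_L, gold)
L.backward()
print(float(L), W_y.grad.shape)
```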
Implementation Details We implemented our neural models using a deep learning library, Theano (Bastien et al., 2012). The number of epochs was set at 50, and we reported the result of the test set in the epoch with the best F-measure from the development set. The parameters were optimized using the stochastic gradient descent method (SGD) via a mini-batch, whose size was selected from {2, 4, 8}. The learning rate was automatically adjusted using Adam (Kingma and Ba, 2014). For the L2 weight decay, the hyper-parameter λ in Eq. 4 was selected from {0.001, 0.0005, 0.0001}. In the neural models, the number of the RNN and Grid layers were selected from {2, 4, 6, 8}. The window size C for the PRED feature (Sec. 3.1) was set at 5. Words with a frequency of 2 or more in the training set were mapped to each word index, and the remaining words were mapped to the unknown word index. The dimensions dword and dmark of the embeddings were set at 32. In the single-sequence model, the parameters of GRUs were set at 32 × 32. In the multi-sequence model, the parameters of GRUs related to the input values were set at 64 × 32, and the remaining were set at 32 × 32. The initial values of all parameters were sampled according to a uniform distribution from [− √ 6 √ row+col, √ 6 √ row+col], where row and col are the number of rows and columns of each matrix, respectively. Baseline Models We compared our models to existing models in previous works (Sec. 5.1) that use the NAIST Text Corpus 1.5. As a baseline for the pointwise approach, we used the pointwise model6 proposed in Imamura et al. (2009). In addition, as a baseline for the joint approach, we used the model proposed in Ouchi et al. (2015). These models exploit gold annotations in the NAIST Text Corpus as POS tags and dependency relations. 6We compared the results of the model reimplemented by Ouchi et al. (2015). Dep Zero All Imamura+ 09 85.06 41.65 78.15 Ouchi+ 15 86.07 44.09 79.23 Single-Seq 88.10 46.10 81.15 Multi-Seq 88.17 † 47.12 † 81.42 † Table 1: F-measures in the test set. SingleSeq is the single-sequence model, and Multi-Seq is the multi-sequence model. Imamura+ 09 is the model in Imamura et al. (2009) reimplemented by Ouchi et al. (2015), and Ouchi+ 15 is the ALL-Cases Joint Model in Ouchi et al. (2015). The mark † denotes the significantly better results with the significance level p < 0.05, comparing Single-Seq and Multi-Seq. 6.2 Results Neural Models vs Baseline Models Table 1 presents F-measures from our neural sequence models with eight RNN or Grid layers and the baseline models on the test set. For the significant test, we used the bootstrap resampling method. According to all metrics, both the single(Single-Seq) and multi-sequence models (MultiSeq) outperformed the baseline models. This confirms that our neural models realize high performance, even without syntactic information, by learning contextual information effective for PAS analysis from the word sequence of the sentence. In particular, for zero arguments (Zero), our models achieved a considerable improvement compared to the joint model in Ouchi et al. (2015). Specifically, the single-sequence model improved by approximately 2.0 points, and the multisequence model by approximately 3.0 points according to the F-measure. These results suggest that modeling the context of the entire sentence using RNNs are beneficial to Japanese PAS analysis, particularly to zero argument identification. 
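The significance marks in Table 1 rest on bootstrap resampling. The paper does not spell out the procedure, so the following is a generic paired-bootstrap sketch under assumed per-sentence scores and an assumed number of resamples; it is meant only to show the shape of such a test.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=10000, seed=0):
    """Approximate p-value that system B does not truly outperform system A,
    given per-sentence scores (e.g. F1) of the two systems on the same test set."""
    rng = random.Random(seed)
    n, wins = len(scores_a), 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]       # resample sentences with replacement
        if sum(scores_b[i] for i in idx) > sum(scores_a[i] for i in idx):
            wins += 1
    return 1.0 - wins / n_resamples

# toy per-sentence scores for Single-Seq (a) vs Multi-Seq (b)
a = [0.80, 0.75, 0.90, 0.60, 0.85] * 40
b = [0.82, 0.78, 0.90, 0.63, 0.85] * 40
print(paired_bootstrap(a, b) < 0.05)   # True for this toy data
```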
Effects of Multiple Predicate Consideration As Table 1 shows, the multi-sequence model significantly outperformed the single-sequence model in terms of the F-measure overall (81.42% vs 81.15%). These results demonstrate that the grid-type neural architecture can effectively capture multi-predicate interactions by connecting the sequences of the argument candidates for all predicates in a sentence. Compared to the single-sequence model for dif1597 Single-Seq Multi-Seq L +res. −res. +res. −res. 2 Dep 87.34 87.10 87.43 87.73 Zero 47.98 47.90 47.66 46.93 All 80.62 80.24 80.71 80.68 4 Dep 87.27 87.41 87.60 87.09 Zero 50.43 50.83 48.10 48.58 All 80.92 80.99 80.99 80.59 6 Dep 87.73 87.11 88.04 87.39 Zero 48.81 49.51 48.98 48.91 All 81.05 80.63 81.19 80.68 8 Dep 87.98 87.23 87.65 87.07 Zero 47.40 48.38 49.34 48.23 All 81.31 80.33 81.33 80.40 Table 2: Performance comparison for different numbers of layers on the development set in Fmeasures. L is the number of the RNN or Grid layers. +res. or −res. indicates whether the model has residual connections (+) or not (−). ferent argument types, the multi-sequence model achieved slightly better results for direct dependency arguments (Dep) (88.10% vs 88.17%). In addition, for zero arguments (Zero), which have no syntactic dependency on their predicate, the multisequence model outperformed the single-sequence model by approximately 1.0 point according to the F-measure (46.10% vs 47.12%). This shows that capturing multi-predicate interactions is particularly effective for zero arguments, which is consistent with the results in Ouchi et al. (2015). Effects of Network Depth Table 2 presents F-measures from the neural sequence models with different network depths and with/without residual connections. The performance tends to improve as the RNN or Grid layers get deeper with residual connections. In particular, the two models with eight layers and residual connections achieved considerable improvements of approximately 1.0 point according to the F-measure compared to models without residual connections. This means that residual connections contribute to effective parameter learning of deeper models. Effects of the Number of Predicates Table 3 presents F-measures from the neural sequence models with different numbers of predicates in a sentence. In Table 3, M denotes how M Type No. Args Single-Seq Multi-Seq 1 Dep 2,733 89.97 89.66 Zero 154 47.62 53.54 All 2,887 88.08 88.01 2 Dep 5,674 89.64 90.11 Zero 836 53.87 54.21 All 6,510 85.39 85.95 3 Dep 6,067 87.72 88.06 Zero 1,357 49.98 51.82 All 7,424 81.43 82.11 4 Dep 4,616 87.80 87.84 Zero 1,205 47.27 48.50 All 5,821 80.31 80.69 5+ Dep 6.983 86.63 86.30 Zero 2,467 39.83 40.66 All 9,450 76.17 76.00 Table 3: Performance comparison for different numbers (M) of predicates in a sentence on the test set in F-measures. many predicates appear in a sentence. For example, the sentence in Figure 1 includes two predicates, “arrested” and “escaped”, and thus in this example M = 2. Overall, performance of both models gradually deteriorated as the number of predicates in a sentence increased, because sentences that contain many predicates are complex and difficult to analyze. However, compared to the singlesequence model, the multi-sequence model suppressed performance degradation, especially for zero arguments (Zero). By contrast, for direct dependency arguments (Dep), both models either achieved almost equivalent performance or the single-sequence model outperformed the multisequence model. 
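The breakdowns above are all reported as label-wise F-measures. The scoring script is not given in the paper, so the following Python sketch shows one standard way of computing per-label precision, recall, and F1 from gold and predicted case labels, under the assumption of one gold label per candidate argument.

```python
from collections import Counter

def per_label_f1(gold, pred, labels=("NOM", "ACC", "DAT")):
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if p in labels:
            (tp if p == g else fp)[p] += 1
        if g in labels and p != g:
            fn[g] += 1
    scores = {}
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        scores[lab] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

gold = ["NOM", "null", "ACC", "null", "NOM", "DAT"]
pred = ["NOM", "null", "null", "ACC", "NOM", "DAT"]
print(per_label_f1(gold, pred))   # {'NOM': 1.0, 'ACC': 0.0, 'DAT': 1.0}
```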
A Detailed investigation of the relation between the number of predicates in a sentence and the complexity of PAS analysis is an interesting line for future work. Comparison per Case Role Table 4 shows F-measures for each case role. For reference, we show the results of the previous studies using the NAIST Text Corpus 1.4β with external resources as well.7 7The major difference between the NAIST Text Corpus 1.4β and 1.5 is the revision of the annotation criterion for the dative case (DAT) (corresponding to Japanese case marker “ に”). Argument and adjunct usages of the case marker “に” are not distinguished in 1.4β, making the identification of the dative case seemingly easy (Ouchi et al., 2015). 1598 Dep Zero NOM ACC DAT NOM ACC DAT NAIST Text Corpus 1.5 Imamura+ 09 86.50 92.84 30.97 45.56 21.38 0.83 Ouchi+ 15 88.13 92.74 38.39 48.11 24.43 4.80 Single-Seq 88.32 93.89 65.91 49.51 35.07 9.83 Multi-Seq 88.75 93.68 64.38 50.65 32.35 7.52 NAIST Text Corpus 1.4β Taira+ 08* 75.53 88.20 89.51 30.15 11.41 3.66 Imamura+ 09* 87.0 93.9 80.8 50.0 30.8 0.0 Sasano+ 11* 39.5 17.5 8.9 Table 4: Performance comparison for different case roles on the test set in F-measures. NOM, ACC or DAT is the nominal, accusative or dative case, respectively. The asterisk (*) indicates that the model uses external resources. Comparing the models using the NAIST Text Corpus 1.5, the single- and multi-sequence models outperformed the baseline models according to all metrics. In particular, for the dative case, the two neural models achieved much higher results, by approximately 30 points. This suggests that although dative arguments appear infrequently compared with the other two case arguments, the neural models can learn them robustly. In addition, for zero arguments (Zero), the neural models achieved better results than the baseline models. In particular, for zero arguments of the nominative case (NOM), the multisequence model demonstrated a considerable improvement of approximately 2.5 points according to the F-measure compared with the joint model in Ouchi et al. (2015). To achieve high accuracy for the analysis of such zero arguments, it is necessary to capture long distance dependencies (Iida et al., 2005; Sasano and Kurohashi, 2011; Iida et al., 2015). Therefore, the improvements of the results suggest that the neural models effectively capture long distance dependencies using RNNs that can encode the context of the entire sentence. 7 Conclusion In this work, we introduced neural sequence models that automatically induce effective feature representations from the word sequence information of a sentence for Japanese PAS analysis. The experiments on the NAIST Text Corpus demonstrated that the models realize high performance without the need for syntactic information. In particular, our multi-sequence model improved the performance of zero argument identification, one of the problematic issues facing Japanese PAS analysis, by considering the multi-predicate interactions with Grid-RNNs. Because our neural models are applicable to SRL, applying our models for multilingual SRL tasks presents an interesting future research direction. In addition, in this work, the model parameters were learned without any external resources. In future work, we plan to explore effective methods for exploiting large-scale unlabeled data to learn the neural models. Acknowledgments This work was partially supported by JST CREST Grant Number JPMJCR1513 and JSPS KAKENHI Grant Number 15K16053. 
We are grateful to the members of the NAIST Computational Linguistics Laboratory and the anonymous reviewers for their insightful comments. References Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of CoNLL. pages 152–164. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder 1599 for statistical machine translation. In Proceedings of EMNLP. pages 1724–1734. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research . Alan Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional LSTM. In Proceedings of Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop. Alex Graves, Santiago Fern´andez, and J¨urgen Schmidhuber. 2005. Bidirectional LSTM networks for improved phoneme classification and recognition. In Proceedings of International Conference on Artificial Neural Networks. pages 799–804. Alex Graves and J¨urgen Schmidhuber. 2009. Offline handwriting recognition with multidimensional recurrent neural networks. In Proceedings of NIPS. pages 545–552. Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2013. Japanese zero reference resolution considering exophora and author/reader mentions. In Proceedings of EMNLP. pages 924–934. Yuta Hayashibe, Mamoru Komachi, and Yuji Matsumoto. 2011. Japanese predicate argument structure analysis exploiting argument position and type. In Proceedings of IJCNLP. pages 201–209. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 . Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2005. Anaphora resolution by antecedent identification followed by anaphoricity determination. ACM Transactions on Asian Language Information Processing (TALIP) 4(4):417–434. Ryu Iida, Mamoru Komachi, Kentaro Inui, and Yuji Matsumoto. 2007. Annotating a Japanese text corpus with predicate-argument and coreference relations. In Proceedings of the Linguistic Annotation Workshop. pages 132–139. Ryu Iida and Massimo Poesio. 2011. A cross-lingual ILP solution to zero anaphora resolution. In Proceedings of ACL-HLT. pages 804–813. Ryu Iida, Kentaro Torisawa, Chikara Hashimoto, JongHoon Oh, and Julien Kloetzer. 2015. Intrasentential zero anaphora resolution using subject sharing recognition. In Proceedings of EMNLP. pages 2179–2189. Ryu Iida, Kentaro Torisawa, Jong-Hoon Oh, Canasai Kruengkrai, and Julien Kloetzer. 2016. Intrasentential subject zero anaphora resolution using multi-column convolutional neural network. In Proceedings of EMNLP. pages 1244–1254. Kenji Imamura, Kuniko Saito, and Tomoko Izumi. 2009. Discriminative approach to predicateargument structure analysis with zero-anaphora resolution. In Proceedings of ACL-IJCNLP. pages 85– 88. Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2016. Grid long short-term memory. In Proceedings of ICLR. D.P. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980. Hiroki Ouchi, Hiroyuki Shindo, Kevin Duh, and Yuji Matsumoto. 2015. Joint case argument identification for Japanese predicate argument structure analysis. In Proceedings of ACL-IJCNLP. pages 961– 970. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of ICML. Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to Japanese zero anaphora resolution with large-scale lexicalized case frames. In Proceedings of IJCNLP. pages 758–766. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing pages 2673–2681. Tomohide Shibata, Daisuke Kawahara, and Sadao Kurohashi. 2016. Neural network-based model for Japanese predicate argument structure analysis. In Proceedings of ACL. pages 1235–1244. Hirotoshi Taira, Sanae Fujita, and Masaaki Nagata. 2008. A Japanese predicate argument structure analysis using decision lists. In Proceedings of EMNLP. pages 523–532. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 . Haitong Yang and Chengqing Zong. 2014. Multipredicate semantic role labeling. In Proceedings of EMNLP. pages 363–373. Katsumasa Yoshikawa, Masayuki Asahara, and Yuji Matsumoto. 2011. Jointly extracting Japanese predicate-argument relation with markov logic. In Proceedings of IJCNLP. pages 1125–1133. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of ACL-IJCNLP. 1600
2017
146
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1601–1611 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1147 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1601–1611 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1147 TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension Mandar Joshi† Eunsol Choi† Daniel S. Weld† Luke Zettlemoyer†‡ † Paul G. Allen School of Computer Science & Engineering, Univ. of Washington, Seattle, WA {mandar90, eunsol, weld, lsz}@cs.washington.edu ‡ Allen Institute for Artificial Intelligence, Seattle, WA [email protected] Abstract We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K questionanswer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a featurebased classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.1 1 Introduction Reading comprehension (RC) systems aim to answer any question that could be posed against the facts in some reference text. This goal is challenging for a number of reasons: (1) the questions can be complex (e.g. have highly compositional semantics), (2) finding the correct answer can require complex reasoning (e.g. combining facts from multiple sentences or background knowledge) and (3) individual facts can be difficult to 1Data and code available at http://nlp.cs. washington.edu/triviaqa/ Question: The Dodecanese Campaign of WWII that was an attempt by the Allied forces to capture islands in the Aegean Sea was the inspiration for which acclaimed 1961 commando film? Answer: The Guns of Navarone Excerpt: The Dodecanese Campaign of World War II was an attempt by Allied forces to capture the Italianheld Dodecanese islands in the Aegean Sea following the surrender of Italy in September 1943, and use them as bases against the German-controlled Balkans. The failed campaign, and in particular the Battle of Leros, inspired the 1957 novel The Guns of Navarone and the successful 1961 movie of the same name. Question: American Callan Pinckney’s eponymously named system became a best-selling (1980s-2000s) book/video franchise in what genre? Answer: Fitness Excerpt: Callan Pinckney was an American fitness professional. She achieved unprecedented success with her Callanetics exercises. Her 9 books all became international best-sellers and the video series that followed went on to sell over 6 million copies. Pinckney’s first video release ”Callanetics: 10 Years Younger In 10 Hours” outsold every other fitness video in the US. 
Figure 1: Question-answer pairs with sample excerpts from evidence documents from TriviaQA exhibiting lexical and syntactic variability, and requiring reasoning from multiple sentences. recover from text (e.g. due to lexical and syntactic variation). Figure 1 shows examples of all these phenomena. This paper presents TriviaQA, a new reading comprehension dataset designed to simultaneously test all of these challenges. Recently, significant progress has been made by introducing large new reading comprehension datasets that primarily focus on one of the challenges listed above, for example by crowdsourcing the gathering of question answer pairs (Rajpurkar et al., 2016) or using cloze-style sentences instead of questions (Hermann et al., 2015; Onishi et al., 2016) (see Table 1 for more examples). In general, system performance has improved rapidly as each resource is released. The best models of1601 Dataset Large scale Freeform Answer Well formed Independent of Evidence Varied Evidence TriviaQA      SQuAD (Rajpurkar et al., 2016)      MS Marco (Nguyen et al., 2016)      NewsQA(Trischler et al., 2016)    *  WikiQA (Yang et al., 2016)      TREC (Voorhees and Tice, 2000)      Table 1: Comparison of TriviaQA with existing QA datasets. Our dataset is unique in that it is naturally occurring, well-formed questions collected independent of the evidences. *NewsQA uses evidence articles indirectly by using only article summaries. ten achieve near-human performance levels within months or a year, fueling a continual need to build ever more difficult datasets. We argue that TriviaQA is such a dataset, by demonstrating that a high percentage of its questions require solving these challenges and showing that there is a large gap between state-of-the-art methods and human performance levels. TriviaQA contains over 650K question-answerevidence triples, that are derived by combining 95K Trivia enthusiast authored question-answer pairs with on average six supporting evidence documents per question. To our knowledge, TriviaQA is the first dataset where full-sentence questions are authored organically (i.e. independently of an NLP task) and evidence documents are collected retrospectively from Wikipedia and the Web. This decoupling of question generation from evidence collection allows us to control for potential bias in question style or content, while offering organically generated questions from various topics. Designed to engage humans, TriviaQA presents a new challenge for RC models. They should be able to deal with large amount of text from various sources such as news articles, encyclopedic entries and blog articles, and should handle inference over multiple sentences. For example, our dataset contains three times as many questions that require inference over multiple sentences than the recently released SQuAD (Rajpurkar et al., 2016) dataset. Section 4 present a more detailed discussion of these challenges. Finally, we present baseline experiments on the TriviaQA dataset, including a linear classifier inspired by work on CNN Dailymail and MCTest (Chen et al., 2016; Richardson et al., 2013) and a state-of-the-art neural network baseline (Seo et al., 2017). The neural model performs best, but only achieves 40% for TriviaQA in comparison to 68% on SQuAD, perhaps due to the challenges listed above. The baseline results also fall far short of human performance levels, 79.7%, suggesting significant room for the future work. In summary, we make the following contributions. 
• We collect over 650K question-answerevidence triples, with questions originating from trivia enthusiasts independent of the evidence documents. A high percentage of the questions are challenging, with substantial syntactic and lexical variability and often requiring multi-sentence reasoning. The dataset and code are available at http://nlp.cs.washington. edu/triviaqa/, offering resources for training new reading-comprehension models. • We present a manual analysis quantifying the quality of the dataset and the challenges involved in solving the task. • We present experiments with two baseline methods, demonstrating that the TriviaQA tasks are not easily solved and are worthy of future study. • In addition to the automatically gathered large-scale (but noisy) dataset, we present a clean, human-annotated subset of 1975 question-document-answer triples whose documents are certified to contain all facts required to answer the questions. 2 Overview Problem Formulation We frame reading comprehension as the problem of answering a question q given the textual evidence provided by document set D. We assume access to a dataset of tuples {(qi, ai, Di)|i = 1 . . . n} where ai is a text string that defines the correct answer 1602 to question qi. Following recent formulations (Rajpurkar et al., 2016), we further assume that ai appears as a substring for some document in the set Di.2 However, we differ by setting Di as a set of documents, where previous work assumed a single document (Hermann et al., 2015) or even just a short paragraph (Rajpurkar et al., 2016). Data and Distant Supervision Our evidence documents are automatically gathered from either Wikipedia or more general Web search results (details in Section 3). Because we gather evidence using an automated process, the documents are not guaranteed to contain all facts needed to answer the question. Therefore, they are best seen as a source of distant supervision, based on the assumption that the presence of the answer string in an evidence document implies that the document does answer the question.3 Section 4 shows that this assumption is valid over 75% of the time, making evidence documents a strong source of distant supervision for training machine reading systems. In particular, we consider two types of distant supervision, depending on the source of our documents. For web search results, we expect the documents that contain the correct answer a to be highly redundant, and therefore let each questionanswer-document tuple be an independent data point. (|Di| = 1 for all i and qi = qj for many i, j pairs). However, in Wikipedia we generally expect most facts to be stated only once, so we instead pool all of the evidence documents and never repeat the same question in the dataset (|Di| = 1.8 on average and qi ̸= qj for all i, j). In other words, each question (paired with the union of all of its evidence documents) is a single data point. These are far from the only assumptions that could be made in this distant supervision setup. For example, our data would also support multiinstance learning, which makes the at least once assumption, from relation extraction (Riedel et al., 2010; Hoffmann et al., 2011) or many other possibilities. However, the experiments in Section 6 show that these assumptions do present a strong 2The data we will present in Section 3 would further support a task formulation where some documents D do not have the correct answer and the model must learn when to abstain. We leave this to future work. 
3An example context for the first question in Figure 1 where such an assumption fails would be the following evidence string: The Guns of Navarone is a 1961 BritishAmerican epic adventure war film directed by J. Lee Thompson. Total number of QA pairs 95,956 Number of unique answers 40,478 Number of evidence documents 662,659 Avg. question length (word) 14 Avg. document length (word) 2,895 Table 2: TriviaQA: Dataset statistics. signal for learning; we believe the data will fuel significant future study. 3 Dataset Collection We collected a large dataset to support the reading comprehension task described above. First we gathered question-answer pairs from 14 trivia and quiz-league websites. We removed questions with less than four tokens, since these were generally either too simple or too vague. We then collected textual evidence to answer questions using two sources: documents from Web search results and Wikipedia articles for entities in the question. To collect the former, we posed each question4 as a search query to the Bing Web search API, and collected the top 50 search result URLs. To exclude the trivia websites, we removed from the results all pages from the trivia websites we scraped and any page whose url included the keywords trivia, question, or answer. We then crawled the top 10 search result Web pages and pruned PDF and other ill formatted documents. The search output includes a diverse set of documents such as blog articles, news articles, and encyclopedic entries. Wikipedia pages for entities mentioned in the question often provide useful information. We therefore collected an additional set of evidence documents by applying TAGME, an off-the-shelf entity linker (Ferragina and Scaiella, 2010), to find Wikipedia entities mentioned in the question, and added the corresponding pages as evidence documents. Finally, to support learning from distant supervision, we further filtered the evidence documents to exclude those missing the correct answer string and formed evidence document sets as described in Section 2. This left us with 95K questionanswer pairs organized into (1) 650K training examples for the Web search results, each contain4Note that we did not use the answer as a part of the search query to avoid biasing the results. 1603 Property Example annotation Statistics Avg. entities / question Which politician won the Nobel Peace Prize in 2009? 1.77 per question Fine grained answer type What fragrant essential oil is obtained from Damask Rose? 73.5% of questions Coarse grained answer type Who won the Nobel Peace Prize in 2009? 15.5% of questions Time frame What was photographed for the first time in October 1959 34% of questions Comparisons What is the appropriate name of the largest type of frog? 9% of questions Table 3: Properties of questions on 200 annotated examples show that a majority of TriviaQA questions contain multiple entities. The boldfaced words hint at the presence of corresponding property. Figure 2: Distribution of hierarchical WordNet synsets for entities appearing in the answer. The arc length is proportional to the number of questions containing that category. ing a single (combined) evidence document, and (2) 78K examples for the Wikipedia reading comprehension domain, containing on average 1.8 evidence documents per example. Table 2 contains the dataset statistics. 
While not the focus of this paper, we have also released the full unfiltered dataset which contains 110,495 QA pairs and 740K evidence documents to support research in allied problems such as open domain and IRstyle question answering. 4 Dataset Analysis A quantitative and qualitative analysis of TriviaQA shows it contains complex questions about a diverse set of entities, which are answerable using the evidence documents. Question and answer analysis TriviaQA questions, authored by trivia enthusiasts, cover various topics of people’s interest. The average question length is 14 tokens indicating that many questions are highly compositional. For qualitative analyType Percentage Numerical 4.17 Free text 2.98 Wikipedia title 92.85 Person 32 Location 23 Organization 5 Misc. 40 Table 4: Distribution of answer types on 200 annotated examples. sis, we sampled 200 question answer pairs and manually analysed their properties. About 73.5% of these questions contain phrases that describe a fine grained category to which the answer belongs, while 15.5% hint at a coarse grained category (one of person, organization, location, and miscellaneous). Questions often involve reasoning over time frames, as well as making comparisons. A summary of the analysis is presented in Table 3. Answers in TriviaQA belong to a diverse set of types. 92.85% of the answers are titles in Wikipedia,5 4.17% are numerical expressions (e.g., 9 kilometres) while the rest are open ended noun and verb phrases. A coarse grained type analysis of answers that are Wikipedia entities presented in Table 4. It should be noted that not all Wikipedia titles are named entities; many are common phrases such as barber or soup. Figure 2 shows diverse topics indicated by WordNet synsets of answer entities. Evidence analysis A qualitative analysis of TriviaQA shows that the evidence contains answers for 79.7% and 75.4% of questions from the Wikipedia and Web domains respectively. To analyse the quality of evidence and evaluate baselines, we asked a human annotator to answer 986 and 1345 (dev and test set) questions from the Wikipedia and Web domains respectively. Trivia 5This is a very large set since Wikipedia has more than 11 million titles. 1604 Reasoning Lexical variation (synonym) Major correspondences between the question and the answer sentence are synonyms. Frequency 41% in Wiki documents, 39% in web documents. Q What is solid CO2 commonly called? Examples S The frozen solid form of CO2, known as dry ice ... Q Who wrote the novel The Eagle Has landed? S The Eagle Has Landed is a book by British writer Jack Higgins Reasoning Lexical variation and world knowledge Major correspondences between the question and the document require common sense or external knowledge. Frequency 17% in Wiki documents, 17% in web documents. Q What is the first name of Madame Bovary in Flaubert’s 1856 novel? S Madame Bovary (1856) is the French writer Gustave Flaubert’s debut novel. The story focuses on a doctor’s Examples wife, Emma Bovary Q Who was the female member of the 1980’s pop music duo, Eurythmics? S Eurythmics were a British music duo consisting of members Annie Lennox and David A. Stewart. Reasoning Syntactic Variation After the question is paraphrased into declarative form, its syntactic dependency structure does not match that of the answer sentence Frequency 69% in Wiki documents, 65% in web documents. Q In which country did the Battle of El Alamein take place? 
Examples S The 1942 Battle of El Alamein in Egypt was actually two pivotal battles of World War II Q Whom was Ronald Reagan referring to when he uttered the famous phrase evil empire in a 1983 speech? S The phrase evil empire was first applied to the Soviet Union in 1983 by U.S. President Ronald Reagan. Reasoning Multiple sentences Requires reasoning over multiple sentences. Frequency 40% in Wiki documents, 35% in web documents. Q Name the Greek Mythological hero who killed the gorgon Medusa. S Perseus asks god to aid him. So the goddess Athena and Hermes helps him out to kill Medusa. Examples Q Who starred in and directed the 1993 film A Bronx Tale? S Robert De Niro To Make His Broadway Directorial Debut With A Bronx Tale: The Musical. The actor starred and directed the 1993 film. Reasoning Lists, Table Answer found in tables or lists Frequency 7% in web documents. Examples Q In Moh’s Scale of hardness, Talc is at number 1, but what is number 2? Q What is the collective name for a group of hawks or falcons? Table 5: Analysis of reasoning used to answer TriviaQA questions shows that a high proportion of evidence sentence(s) exhibit syntactic and lexical variation with respect to questions. Answers are indicated by boldfaced text. questions contain multiple clues about the answer(s) not all of which are referenced in the documents. The annotator was asked to answer a question if the minimal set of facts (ignoring temporal references like this year) required to answer the question are present in the document, and abstain otherwise. For example, it is possible to answer the question, Who became president of the Mormons in 1844, organised settlement of the Mormons in Utah 1847 and founded Salt Lake City? using only the fact that Salt Lake City was founded by Brigham Young. We found that the accuracy (evaluated using the original answers) for the Wikipedia and Web domains was 79.6 and 75.3 respectively. We use the correctly answered questions (and documents) as verified sets for evaluation (section 6). Challenging problem A comparison of evidence with respect to the questions shows a high proportion of questions require reasoning over multiple sentences. To compare our dataset against previous datasets, we classified 100 question-evidence pairs each from Wikipedia and the Web according to the form of reasoning required to answer them. We focus the analysis on Wikipedia since the analysis on Web documents are similar. Categories are not mutually exclusive: single example can fall into multiple categories. A summary of the analysis is presented in Table 5. On comparing evidence sentences with their corresponding questions, we found that 69% of the questions had a different syntactic structure while 41% were lexically different. For 40% of the questions, we found that the information re1605 quired to answer them was scattered over multiple sentences. Compared to SQuAD, over three times as many questions in TriviaQA require reasoning over multiple sentences. Moreover, 17% of the examples required some form of world knowledge. Question-evidence pairs in TriviaQA display more lexical and syntactic variance than SQuAD. This supports our earlier assertion that decoupling question generation from evidence collection results in a more challenging problem. 5 Baseline methods To quantify the difficulty level of the dataset for current methods, we present results on neural and other models. 
We used a random entity baseline and a simple classifier inspired from previous work (Wang et al., 2015; Chen et al., 2016), and compare these to BiDAF (Seo et al., 2017), one of the best performing models for the SQuAD dataset. 5.1 Random entity baseline We developed the random entity baseline for the Wikipedia domain since the provided documents can be directly mapped to candidate answers. In this heuristic approach, we first construct a candidate answer set using the entities associated with the provided Wikipedia pages for a given question (on average 1.8 per question). We then randomly pick a candidate that does not occur in the question. If no such candidate exists, we pick any random candidate from the candidate set. 5.2 Entity classifier We also frame the task as a ranking problem over candidate answers in the documents. More formally, given a question qi, an answer a+ i , and a evidence document Di, we want to learn a scoring function score, such that score(a+ i |qi, Di) > score(a− i |qi, Di) where a− i is any candidate other than the answer. The function score is learnt using LambdaMART (Wu et al., 2010),6 a boosted tree based ranking algorithm. This is similar to previous entity-centric classifiers for QA (Chen et al., 2016; Wang et al., 2015), and uses context and Wikipedia catalog based features. To construct the candidate answer set, we 6We use the RankLib implementation https:// sourceforge.net/p/lemur/wiki/RankLib/ consider sentences that contain at least one word in common with the question. We then add every n-gram (n ∈[1, 5]) that occurs in these sentences and is a title of some Wikipedia article.7 5.3 Neural model Recurrent neural network models (RNNs) (Hermann et al., 2015; Chen et al., 2016) have been very effective for reading comprehension. For our task, we modified the BiDAF model (Seo et al., 2017), which takes a sequence of context words as input and outputs the start and end positions of the predicted answer in the context. The model utilizes an RNN at the character level, token level, and phrase level to encode context and question and uses attention mechanism between question and context. Authored independently from the evidence document, TriviaQA does not contain the exact spans of the answers. We approximate the answer span by finding the first match of answer string in the evidence document. Developed for a dataset where the evidence document is a single paragraph (average 122 words), the BiDAF model does not scale to long documents. To overcome this, we truncate the evidence document to the first 800 words.8 When the data contains more than one evidence document, as in our Wikipedia domain, we predict for each document separately and aggregate the predictions by taking a sum of confidence scores. More specifically, when the model outputs a candidate answer Ai from n documents Di,1, ...Di,n with confidences ci,1, ...ci,n, the score of Ai is given by score(Ai) = X k ci,k We select candidate answer with the highest score. 6 Experiments An evaluation of our baselines shows that both of our tasks are challenging, and that the TriviaQA dataset supports significant future work. 7Using a named entity recognition system to generate candidate entities is not feasible as answers can be common nouns or phrases. 8We found that splitting documents into smaller sub documents degrades performance since a majority of sub documents do not contain the answer. 
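The multi-document aggregation described in Section 5.3 can be sketched as follows: each evidence document is truncated to its first 800 words, answered independently, and candidate scores are summed over documents (score(A_i) = Σ_k c_{i,k}), with the highest-scoring candidate selected. The predict_span stub merely stands in for the trained BiDAF model and is purely illustrative.

```python
from collections import defaultdict

MAX_DOC_WORDS = 800

def predict_span(question, doc_words):
    """Placeholder for the trained BiDAF model: returns (answer string, confidence).
    A real system would run the network here; this stub is purely illustrative."""
    return " ".join(doc_words[:3]), 0.5

def answer_question(question, documents):
    scores = defaultdict(float)
    for doc in documents:
        words = doc.split()[:MAX_DOC_WORDS]      # truncate long evidence documents
        candidate, confidence = predict_span(question, words)
        scores[candidate] += confidence          # score(A_i) = sum_k c_{i,k}
    return max(scores, key=scores.get)           # highest aggregated score wins

docs = ["Callan Pinckney was an American fitness professional ...",
        "Callan Pinckney achieved unprecedented success with her exercises ..."]
print(answer_question("What genre ...?", docs))
```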
1606 Train Dev Test Wikipedia Questions 61,888 7,993 7,701 Documents 110,648 14,229 13,661 Web Questions 76,496 9,951 9,509 Documents 528,979 68,621 65,059 Wikipedia verified Questions 297 584 Documents 305 592 Web Questions 322 733 verified Documents 325 769 Table 6: Data statistics for each task setup. The Wikipedia domain is evaluated over questions while the web domain is evaluated over documents. 6.1 Evaluation Metrics We use the same evaluation metrics as SQuAD – exact match (EM) and F1 over words in the answer(s). For questions that have Numerical and FreeForm answers, we use a single given answer as ground truth. For questions that have Wikipedia entities as answers, we use Wikipedia aliases as valid answer along with the given answer. Since Wikipedia and the web are vastly different in terms of style and content, we report performance on each source separately. While using Wikipedia, we evaluate at the question level since facts needed to answer a question are generally stated only once. On the other hand, due to high information redundancy in web documents (around 6 documents per question), we report document level accuracy and F1 when evaluating on web documents. Lastly, in addition to distant supervision, we also report evaluation on the clean dev and test questions collection using a human annotator (section 4) 6.2 Experimental Setup We randomly partition QA pairs in the dataset into train (80%), development (10%), and test set (10%). In addition to distant supervision evaluation, we also evaluate baselines on verified subsets (see section 4) of the dev and test partitions. Table 6 contains the number of questions and documents for each task. We trained the entity classifier on a random sample of 50,000 questions from the training set. For training BiDAF on the web domain, we first randomly sampled 80,000 documents. For both domains, we used only those (training) documents where the answer appears in the first 400 tokens to keep training time manageable. Designing scalable techniques that can use the entirety of the data is an interesting direction for future work. 6.3 Results The performance of the proposed models is summarized in Table 7. The poor performance of the random entity baseline shows that the task is not already solved by information retrieval. For both Wikipedia and web documents, BiDAF (40%) outperforms the classifier (23%). The oracle score is the upper bound on the exact match accuracy.9 All models lag significantly behind the human baseline of 79.7% on the Wikipedia domain, and 75.4% on the web domain. We analyse the performance of BiDAF on the development set using Wikipedia as the evidence source by question length and answer type. The accuracy of the system steadily decreased as the length of the questions increased – with 50% for questions with 5 or fewer words to 32% for 20 or more words. This suggests that longer compositional questions are harder for current methods. 6.4 Error analysis Our qualitative error analysis reveals that compositionality in questions and lexical variation and low signal-to-noise ratio in (full) documents is still a challenge for current methods. We randomly sampled 100 incorrect BiDAF predictions from the development set and used Wikipedia evidence documents for manual analysis. We found that 19 examples lacked evidence in any of the provided documents, 3 had incorrect ground truth, and 3 were valid answers that were not included in the answer key. 
Furthermore, 12 predictions were partially correct (Napoleonic vs Napoleonic Wars). This seems to be consistent with human performance of 79.7%. For the rest, we classified each example into one or more categories listed in Table 8. Distractor entities refers to the presence of entities similar to ground truth. E.g., for the question, Rebecca Front plays Detective Chief Superintendent Innocent in which TV series?, the evidence describes all roles played by Rebecca Front. The first two rows suggest that long and noisy documents make the question answering task more difficult, as compared for example to the short passages in SQuAD. Furthermore, a high proportion of errors are caused by paraphrasing, and the answer is sometimes stated indirectly. For 9A question q is considered answerable for the oracle score if the correct answer is found in the evidence D or, in case of the classifier, is a part of the candidate set. Since we truncate documents, the upper bound is not 100%. 1607 Distant Supervision Verified Method Domain Dev Test Dev Test EM F1 Oracle EM F1 Oracle EM F1 Oracle EM F1 Oracle Random 12.72 22.91 16.30 12.74 22.35 16.28 14.81 23.31 19.53 15.41 25.44 19.19 Classifier Wiki 23.42 27.68 71.41 22.45 26.52 71.67 24.91 29.43 80.13 27.23 31.37 77.74 BiDAF 40.26 45.74 82.55 40.32 45.91 82.82 47.47 53.70 90.23 44.86 50.71 86.81 Classifier web 24.64 29.08 66.78 24.00 28.38 66.35 27.38 31.91 77.23 30.17 34.67 76.72 BiDAF 41.08 47.40 82.93 40.74 47.05 82.95 51.38 55.47 90.46 49.54 55.80 89.99 Table 7: Performance of all systems on TriviaQA using distantly supervised evaluation. The best performing system is indicated in bold. Category Proportion Insufficient evidence 19 Prediction from incorrect document(s) 7 Answer not in clipped document 15 Paraphrasing 29 Distractor entities 11 Reasoning over multiple sentences 18 Table 8: Qualitative error analysis of BiDAF on Wikipedia evidence documents. example, the evidence for the question What was Truman Capote’s last name before he was adopted by his stepfather? consists of the following text Truman Garcia Capote born Truman Streckfus Persons, was an American ... In 1933, he moved to New York City to live with his mother and her second husband, Joseph Capote, who adopted him as his stepson and renamed him Truman Garca Capote. 7 Related work Recent interest in question answering has resulted in the creation of several datasets. However, they are either limited in scale or suffer from biases stemming from their construction process. We group existing datasets according to their associated tasks, and compare them against TriviaQA. The analysis is summarized in Table 1. 7.1 Reading comprehension Reading comprehension tasks aims to test the ability of a system to understand a document using questions based upon its contents. Researchers have constructed cloze-style datasets (Hill et al., 2015; Hermann et al., 2015; Paperno et al., 2016; Onishi et al., 2016), where the task is to predict missing words, often entities, in a document. Cloze-style datasets, while easier to construct large-scale automatically, do not contain natural language questions. Datasets with natural language questions include MCTest (Richardson et al., 2013), SQuAD (Rajpurkar et al., 2016), and NewsQA (Trischler et al., 2016). MCTest is limited in scale with only 2640 multiple choice questions. SQuAD contains 100K crowdsourced questions and answers paired with short Wikipedia passages. 
NewsQA uses crowdsourcing to create questions solely from news article summaries in order to control potential bias. The crucial difference between SQuAD/NewsQA and TriviaQA is that TriviaQA questions have not been crowdsourced from preselected passages. Additionally, our evidence set consists of web documents, while SQuAD and NewsQA are limited to Wikipedia and news articles respectively. Other recently released datasets include (Lai et al., 2017). 7.2 Open domain question answering The recently released MS Marco dataset (Nguyen et al., 2016) also contains independently authored questions and documents drawn from the search results. However, the questions in the dataset are derived from search logs and the answers are crowdsourced. On the other hand, trivia enthusiasts provided both questions and answers for our dataset. Knowledge base question answering involves converting natural language questions to logical forms that can be executed over a KB. Proposed datasets (Cai and Yates, 2013; Berant et al., 2013; Bordes et al., 2015) are either limited in scale or in the complexity of questions, and can only retrieve facts covered by the KB. A standard task for open domain IR-style QA is the annual TREC competitions (Voorhees and Tice, 2000), which contains questions from various domains but is limited in size. Many advances from the TREC competitions were used in the IBM Watson system for Jeopardy! (Ferrucci et al., 2010). Other datasets includes SearchQA 1608 (Dunn et al., 2017) where Jeopardy! questions are paired with search engine snippets, the WikiQA dataset (Yang et al., 2015) for answer sentence selection, and the Chinese language WebQA (Li et al., 2016) dataset, which focuses on the task of answer phrase extraction. TriviaQA contains examples that could be used for both stages of the pipeline, although our focus on this paper is instead on using the data for reading comprehension where the answer is always present. Other recent approaches attempt to combine structured high precision KBs with semistructured information sources like OpenIE triples (Fader et al., 2014), HTML tables (Pasupat and Liang, 2015), and large (and noisy) corpora (Sawant and Chakrabarti, 2013; Joshi et al., 2014; Xu et al., 2015). TriviaQA, which has Wikipedia entities as answers, makes it possible to leverage structured KBs like Freebase, which we leave to future work. Furthermore, about 7% of the TriviaQA questions have answers in HTML tables and lists, which could be used to augment these existing resources. Trivia questions from quiz bowl have been previously used in other question answering tasks (Boyd-Graber et al., 2012). Quiz bowl questions are paragraph length and pyramidal.10 A number of different aspects of this problem have been carefully studied, typically using classifiers over a pre-defined set of answers (Iyyer et al., 2014) and studying incremental answering to answer as quickly as possible (Boyd-Graber et al., 2012) or using reinforcement learning to model opponent behavior (He et al., 2016). These competitive challenges are not present in our single-sentence question setting. Developing joint models for multisentence reasoning for questions and answer documents is an important area for future work. 8 Conclusion and Future Work We present TriviaQA, a new dataset of 650K question-document-evidence triples. To our knowledge, TriviaQA is the first dataset where questions are authored by trivia enthusiasts, independently of the evidence documents. 
The evidence documents come from two domains – Web search results and Wikipedia pages – with highly differing levels of information redundancy. Results from current state-of-the-art baselines indi10Pyramidal questions consist of a series of clues about the answer arranged in order from most to least difficult. cate that TriviaQA is a challenging testbed that deserves significant future study. While not the focus of this paper, TriviaQA also provides a provides a benchmark for a variety of other tasks such as IR-style question answering, QA over structured KBs and joint modeling of KBs and text, with much more data than previously available. Acknowledgments This work was supported by DARPA contract FA8750-13-2-0019, the WRF/Cable Professorship, gifts from Google and Tencent, and an Allen Distinguished Investigator Award. The authors would like to thank Minjoon Seo for the BiDAF code, and Noah Smith, Srinivasan Iyer, Mark Yatskar, Nicholas FitzGerald, Antoine Bosselut, Dallas Card, and anonymous reviewers for helpful comments. References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 1821 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1533–1544. http://aclweb.org/anthology/D/D13/D13-1160.pdf. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. CoRR abs/1506.02075. https://arxiv.org/abs/1506.02075. Jordan Boyd-Graber, Brianna Satinoff, He He, and Hal Daum´e III. 2012. Besting the quiz master: Crowdsourcing incremental classification games. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, Jeju Island, Korea, pages 1290–1301. http://www.aclweb.org/anthology/D12-1118. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 423–433. http://www.aclweb.org/anthology/P13-1042. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the 1609 cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2358–2367. http://www.aclweb.org/anthology/P16-1223. Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. CoRR https://arxiv.org/abs/1704.05179. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, KDD ’14, pages 1156–1165. https://doi.org/10.1145/2623330.2623677. Paolo Ferragina and Ugo Scaiella. 2010. Tagme: On-the-fly annotation of short text fragments (by wikipedia entities). In Proceedings of the 19th ACM International Conference on Information and Knowledge Management. 
ACM, New York, NY, USA, CIKM ’10, pages 1625–1628. https://doi.org/10.1145/1871437.1871689. David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. Building watson: An overview of the deepqa project. AI MAGAZINE 31(3):59–79. He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daum´e III. 2016. Opponent modeling in deep reinforcement learning. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning. PMLR, New York, New York, USA, volume 48 of Proceedings of Machine Learning Research, pages 1804–1813. http://proceedings.mlr.press/v48/he16.html. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1506.03340. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. CoRR https://arxiv.org/abs/1511.02301. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 541–550. http://www.aclweb.org/anthology/P11-1055. Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. 2014. A neural network for factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 633–644. http://www.aclweb.org/anthology/D14-1070. Mandar Joshi, Uma Sawant, and Soumen Chakrabarti. 2014. Knowledge graph and corpus driven segmentation and answer inference for telegraphic entityseeking queries. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1104–1114. http://www.aclweb.org/anthology/D14-1117. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. CoRR https://arxiv.org/abs/1704.04683. Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. 2016. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. CoRR https://arxiv.org/abs/1607.06275. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop in Advances in Neural Information Processing Systems. https://arxiv.org/pdf/1611.09268.pdf. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2230–2235. https://aclweb.org/anthology/D161241. Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. 
The lambada dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1525– 1534. http://www.aclweb.org/anthology/P16-1144. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the 1610 Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers. pages 1470–1480. http://aclweb.org/anthology/P/P15/P15-1142.pdf. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2383–2392. https://aclweb.org/anthology/D16-1264. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 193–203. http://www.aclweb.org/anthology/D13-1020. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases: Part III. Springer-Verlag, Berlin, Heidelberg, ECML PKDD’10, pages 148–163. http://dl.acm.org/citation.cfm?id=1889788.1889799. Uma Sawant and Soumen Chakrabarti. 2013. Learning joint query interpretation and response ranking. In Proceedings of the 22Nd International Conference on World Wide Web. ACM, New York, NY, USA, WWW ’13, pages 1099–1110. https://doi.org/10.1145/2488388.2488484. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of the International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1611.01603. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. CoRR https://arxiv.org/abs/1611.09830. Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, SIGIR ’00, pages 200–207. https://doi.org/10.1145/345508.345577. Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2015. Machine comprehension with syntax, frames, and semantics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 700–706. http://www.aclweb.org/anthology/P15-2115. Qiang Wu, Christopher J. Burges, Krysta M. Svore, and Jianfeng Gao. 2010. Adapting boosting for information retrieval measures. Inf. Retr. 13(3):254–270. https://doi.org/10.1007/s10791-009-9112-1. 
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning. https://arxiv.org/abs/1502.03044. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2013–2018. http://aclweb.org/anthology/D15-1237. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 1480–1489. http://www.aclweb.org/anthology/N16-1174. 1611
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1612–1622 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1148 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1612–1622 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1148 Learning Semantic Correspondences in Technical Documentation Kyle Richardson and Jonas Kuhn Institute of Natural Language Processing University of Stuttgart {kyle,jonas}@ims.uni-stuttgart.de Abstract We consider the problem of translating high-level textual descriptions to formal representations in technical documentation as part of an effort to model the meaning of such documentation. We focus specifically on the problem of learning translational correspondences between text descriptions and grounded representations in the target documentation, such as formal representation of functions or code templates. Our approach exploits the parallel nature of such documentation, or the tight coupling between high-level text and the low-level representations we aim to learn. Data is collected by mining technical documents for such parallel text-representation pairs, which we use to train a simple semantic parsing model. We report new baseline results on sixteen novel datasets, including the standard library documentation for nine popular programming languages across seven natural languages, and a small collection of Unix utility manuals. 1 Introduction Technical documentation in the computer domain, such as source code documentation and other howto manuals, provide high-level descriptions of how lower-level computer programs and utilities work. Often these descriptions are coupled with formal representations of these lower-level features, expressed in the target programming languages. For example, Figure 1.1 shows the source code documentation (in red/bold) for the max function in the Java programming language paired with the representation of this function in the underlying Java language (in black). This formal representation captures the name of the function, the return 1. Java Documentation *Returns the greater of two long values * * @param a an argument * @param b another argument * @return the larger of a and b * @see java.lang.Long#MAX VALUE */ public static long max(long a, long b) 2. Clojure Documentation (defn random-sample "Returns items from coll with random probability of prob (0.0 - 1.0)" ([prob coll] ...)) 3. PHP documentation (French) Ajoute une valeur comme dernier ´el´ement * * @param value La valeur ´a ajouter * @see ArrayIterations::next() */ public void append(mixed $value) Figure 1: Example source code documentation. value, the types of arguments the function takes, among other details related to the function’s place and visibility in the overall source code collection or API. Given the high-level nature of the textual annotations, modeling the meaning of any given description is not an easy task, as it involves much more information than what is directly provided in the associated documentation. For example, capturing the meaning of the description the greater of might require having a background theory about quantity/numbers and relations between different quantities. 
A first step towards capturing the meaning, however, is learning to translate this description to symbols in the target representation, in this case to the max symbol. By doing this translation to a formal language, modeling and learning the subsequent semantics becomes easier since we are eliminating the ambiguity of ordinary lan1612 Unix Utility Manual NAME : dappprof profile user and lib function usage. SYNOPSIS dappprof [-ac] -p PID | command DESCRIPTION -p PID examine the PID ... EXAMPLES Print elapsed time for PID 1871 dappprof -p PID=1871 SEE ALSO: dapptrace(1M), dtrace(1M), ... Figure 2: An example computer utility manual in the Unix domain. Descriptions of example uses are shown in red. guage. Similarly, we would want to first translate the description two long values, which specifies the number and type of argument taken by this function, to the sequence long a,long b. By focusing on translation, we can create new datasets by mining these types of source code collections for sets of parallel text-representation pairs. Given the wide variety of available programming languages, many such datasets can be constructed, each offering new challenges related to differences in the formal representations used by different programming languages. Figure 1.2 shows example documentation for the Clojure programming language, which is part of the Lisp family of languages. In this case, the description Returns random probability of should be translated to the function name random-sample since it describes what the overall function does. Similarly, the argument descriptions from coll and of prob should translate to coll and prob. Given the large community of programmers around the world, many source code collections are available in languages other than English. Figure 1.3 shows an example entry from the French version of the PHP standard library, which was translated by volunteer developers. Having multilingual data raises new challenges, and broadens the scope of investigations into this type of semantic translation. Other types of technical documentation, such as utility manuals, exhibit similar features. Figure 2 shows an example manual in the domain of Unix utilities. The textual description in red/bold describes an example use of the dappprof utility paired with formal representations in the form of executable code. As with the previous examples, such formal representations do not capture the full meaning of the different descriptions, but serve as a convenient operationalization, or translational semantics, of the meaning in Unix. Print elapsed time, for example, roughly describes what the dappprof utility does, whereas PID 1871 describes the second half of the code sequence. In both types of technical documentation, information is not limited to raw pairs of descriptions and representations, but can include other information and clues that are useful for learning. Java function annotations include textual descriptions of individual arguments and return values (shown in green). Taxonomic information and pointers to related functions or utilities are also annotated (e.g., the @see section in Figure 1, or SEE ALSO section in Figure 2). Structural information about code sequences, and the types of abstract arguments these sequences take, are described in the SYNOPSIS section of the Unix manual. This last piece of information allows us to generate abstract code templates, and generalize individual arguments. 
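To make this abstraction step concrete, the following is a minimal Python sketch of how flag arguments in an example command could be typed against the SYNOPSIS line. The helper names and the simple regular expression are illustrative assumptions, not the preprocessing pipeline actually used to build the corpus.

import re

def flag_types_from_synopsis(synopsis):
    # Illustrative: read a flag -> argument-type map from a SYNOPSIS line,
    # e.g. "dappprof [-ac] -p PID | command" yields {"-p": "PID"}.
    return dict(re.findall(r"(-\w)\s+([A-Z]+)", synopsis))

def abstract_example(tokens, flag_types):
    # Replace each raw value that follows a typed flag with its abstract type.
    typed = []
    for prev, tok in zip([None] + tokens[:-1], tokens):
        if prev in flag_types and not tok.startswith("-"):
            typed.append(flag_types[prev])   # e.g. "1871" -> "PID"
        else:
            typed.append(tok)
    return typed

types = flag_types_from_synopsis("dappprof [-ac] -p PID | command")
print(abstract_example("dappprof -p 1871".split(), types))   # ['dappprof', '-p', 'PID']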
For example, the raw argument 1871 in the sequence dappprof -p 1871 can be typed as a PID instance, and an argument of the -p flag. Given this type of data, a natural experiment is to see whether we can build programs that translate high-level textual descriptions to correct formal representations. We aim to learn these translations using raw text-meaning pairs as the sole supervision. Our focus is on learning function translations or representations within nine programming language APIs, each varying in size, representation style, and source natural language. To our knowledge, our work is the first to look at translating source code descriptions to formal representations using such a wide variety of programming and natural languages. In total, we introduce fourteen new datasets in the source code domain that include seven natural languages, and report new results for an existing dataset. As well, we look at learning simple code templates using a small collection of English Unix manuals. The main goal of this paper is to establish strong baselines results on these resources, which we hope can be used for benchmarking and developing new semantic parsing methods. We achieved initial baselines using the language modeling and translation approach of Deng and Chrupała (2014). We also show that modest improvements can be achieved by using a more conventional 1613 discriminative model (Zettlemoyer and Collins, 2009) that, in part, exploits document-level features from the technical documentation sets. 2 Related Work Our work is situated within research on semantic parsing, which focuses on the problem of generating formal meaning representations from text for natural language understanding applications. Recent interest in this topic has centered around learning meaning representation from example text-meaning pairs, for applications such as automated question-answering (Berant et al., 2013), robot control (Matuszek et al., 2012) and text generation (Wong and Mooney, 2007a). While generating representations for natural language understanding is a complex task, most studies focus on the translation or generation problem independently of other semantic or knowledge representation issues. Earlier work looks at supervised learning of logical representations using example text-meaning pairs using tools from statistical machine translation (Wong and Mooney, 2006) and parsing (Zettlemoyer and Collins, 2009). These methods are meant to be applicable to a wide range of translation problems and representation types, which make new parallel datasets or resources useful for furthering the research. In general, however, such datasets are hard to construct since building them requires considerable domain knowledge and knowledge of logic. Alternatively, we construct parallel datasets automatically from technical documentation, which obviates the need for annotation. While the formal representations are not actual logical forms, they still provide a good test case for testing how well semantic parsers learn translations to representations. To date, most benchmark datasets are limited to small controlled domains, such as geography and navigation. While attempts have been made to do open-domain semantic parsing using larger, more complex datasets (Berant et al., 2013; Pasupat and Liang, 2015), such resources are still scarce. In Figure 3, we compare the details of one widely used dataset, Geoquery (Zelle and Mooney, 1996), to our new datasets. 
Our new resources are on average much larger than geoquery in terms of the number of example pairs, and the size of the different language vocabularies. Most existing datasets are also primarily English-based, while we focus on learning in a multilingual setting using several new moderately sized datasets. Within semantic parsing, there has also been work on situated or grounded learning, that involves learning in domains with weak supervision and indirect cues (Liang, 2016; Richardson and Kuhn, 2016). This has sometimes involved learning from automatically generated parallel data and representations (Chen and Mooney, 2008) of the type we consider in this paper. Here one can find work in technical domains, including learning to generate regular expressions (Manshadi et al., 2013; Kushman and Barzilay, 2013) and other types of source code (Quirk et al., 2015), which ultimately aim to solve the problem of natural language programming. We view our work as one small step in this general direction. Our work is also related to software components retrieval and builds on the approach of Deng and Chrupała (2014). Robustly learning the translation from language to code representations can help to facilitate natural language querying of API collections (Lv et al., 2015). As part of this effort, recent work in machine learning has focused on the similar problem of learning code representations using resources such as StackOverflow and Github. These studies primarily focus on learning longer programs (Allamanis et al., 2015) as opposed to function representations, or focus narrowly on a single programming language such as Java (Gu et al., 2016) or on related tasks such as text generation (Iyer et al., 2016; Oda et al., 2015). To our knowledge, none of this work has been applied to languages other than English or such a wide variety of programming languages. 3 Mapping Text to Representations In this section, we formulate the basic problem of translating to representations in technical documentation. 3.1 Problem Description We use the term technical documentation to refer to two types of resources: textual descriptions inside of source code collections, and computer utility manuals. In this paper, the first type includes high-level descriptions of functions in standard library source code documentation. The second type includes a collection of Unix manuals, also known as man pages. Both types include pairs of text and code representations. 1614 Dataset #Pairs #Descr. Symbols#Words Vocab. Example Pairs (x, z), Goal: learn a function x →z Java 7,183 4,804 4,072 82,696 3,721 x : Compares this Calendar to the specified Object. z : boolean util.Calendar.equals(Object obj) Ruby 6,885 1,849 3,803 67,274 5,131 x : Computes the arc tangent given y and x. z : Math.atan2(y,x) →Float PHPen 6,611 13,943 8,308 68,921 4,874 x : Delete an entry in the archive using its name. z : bool ZipArchive::deleteName(string $name) Python 3,085 429 3,991 27,012 2,768 x : Remove the specific filter from this handler. z : logging.Filterer.removeFilter(filter) Elisp 2,089 1,365 1,883 30,248 2,644 x : This function returns the total height, in lines, of the window. z : (window-total-height window round) Haskell 1,633 255 1,604 19,242 2,192 x : Extract the second component of a pair. z : Data.Tuple.snd :: (a, b) -> b Clojure 1,739 – 2,569 17,568 2,233 x : Returns a lazy seq of every nth item in coll. z : (core.take-nth n coll) C 1,436 1,478 1,452 12,811 1,835 x : Returns the current file position of the stream stream. 
z : long int ftell(FILE *stream) Scheme 1,301 376 1,343 15,574 1,756 x : Returns a new port with type port-type and the given state. z : (make-port port-type state) Unix 921 940 1,000 11,100 2,025 x : To get policies for a specific user account. z : pwpolicy -u username -getpolicy Geoquery 880 – 167 6,663 279 x : What is the tallest mountain in America? z : (highest(mountain(loc 2(countryid usa)))) Figure 3: Description of our English corpus collection with example text/function pairs. We will refer to the target representations in these resources as API components, or components. In source code, components are formal representations of functions, or function signatures (Deng and Chrupała, 2014). The form of a function signature varies depending on the resource, but in general gives a specification of how a function is named and structured. The example function signatures in Figure 3 all specify a function name, a list of arguments, and other optional information such as a return value and a namespace. Components in utility manuals are short executable code sequences intended to show an example use of a utility. We assume typed code sequences following Richardson and Kuhn (2014), where the constituent parts of the sequences are abstracted by type. Given a set of example text-component pairs, D = {(xi, zi)}n i=1, the goal is to learn how to generate correct, well-formed components z ∈C for each input x. Viewed as a semantic parsing problem, this treats the target components as a kind of formal meaning representation, analogous to a logical form. In our experiments, we assume that the complete set of output components are known. In the API documentation sets, this is because each standard library contains a finite number of function representations, roughly corresponding to the number of pairs as shown in Figure 3. For a given input, therefore, the goal is to find the best candidate function translation within the space of the total API components C (Deng and Chrupała, 2014). Given these constraints, our setup closely resembles that of Kushman et al. (2014), who learn to parse algebra word problems using a small set of equation templates. Their approach is inspired by template-based information extraction, where templates are recognized and instantiated by slotfilling. Our function signatures and code templates have a similar slot-like structure, consisting of slots such as return value, arguments, function name and namespace. 3.2 Language Modeling Baselines Existing approaches to semantic parsing formalize the mapping from language to logic using a variety of formalisms including CFGs (B¨orschinger et al., 2011), CCGs (Kwiatkowski et al., 2010), synchronous CFGs (Wong and Mooney, 2007b). Deciding to use one formalism over another is often motivated by the complexities of the target representations being learned. For example, recent interest in learning graph-based representations such as those in the AMR bank (Banarescu et al., 2013) 1615 requires parsing models that can generate complex graph shaped derivations such as CCGs (Artzi et al., 2015) or HRGs (Peng et al., 2015). Given the simplicity of our API representations, we opt for a simple semantic parsing model that exploits the finiteness of our target representations. Following ((Deng and Chrupała, 2014); henceforth DC), we treat the problem of component translation as a language modeling problem (Song and Croft, 1999). 
For a given query sequence or text x = wi, .., wI and component sequence z = uj, .., uJ, the probability of the component given the query is defined as follows using Bayes’ theorem: p(z|x) ∝p(x|z)p(z). By assuming a uniform prior over the probability of each component p(z), the problem reduces to computing p(x|z), which is where language modeling is used. Given each word wi in the query, a unigram model is defined as p(x|z) = ∏I i=1 p(wi|z). Using this formulation, we can then define different models to estimate p(w|z). Term Matching As a baseline for p(w|z), DC define a term matching approach that exploits the fact that many queries in our English datasets share vocabulary with target component vocabulary. A smoothed version of this baseline is defined below, where f(w|z) is the frequency of matching terms in the target signature, f(w|C) is frequency of the term word in the overall documentation collection, and λ is a smoothing parameter (for Jelinek-Mercer smoothing): p(x|z) = ∏ w∈x (1 −λ)f(w|z) + λf(w|C) Translation Model In order to account for the co-occurrence between non-matching words and component terms, DC employ a word-based translation model, which models the relation between natural language words wj and individual component terms uj. In this paper, we limit ourselves to sequence-based word alignment models (Och and Ney, 2003), which factor in the following manner: p(x|z) = I∏ i=1 J ∑ j=0 pt(wi|uj)pd(l(j)|i, I, J) Here each pt(wi|uj) defines an (unsmoothed) multinomial distribution over a given component term uj for all words wj. The function pd is a distortion parameter, and defines a dependency between the alignment positions and the lengths of Algorithm 1 Rank Decoder Input: Query x, Components C of size m, rank k, model A, sort function K-BEST Output: Top k components ranked by A model score p 1: procedure RANKCOMPONENTS(x, C, k, A) 2: SCORES ←[ ] ▷Initialize score list 3: for each component c ∈C do 4: p ←ALIGNA(x, c) ▷Score using A 5: SCORES += (c, p) ▷Add to list 6: return K-Best(SCORES,k) ▷k best components both input strings. This function, and the definition of l(j), assumes different forms according to the particular alignment model being used. We consider three different types of alignment models each defined in the following way: pd(l(j)|...) =    1 J+1 (1) a(j|i, I, J) (2) a(t(j)|i, I, tlen(J)) (3) Models (1-2) are the classic IBM word-alignment models of Brown et al. (1993). IBM Model 1, for example, assumes a uniform distribution over all positions, and is the main model investigated in DC. For comparison, we also experiment with IBM Model 2, where each l(j) refers to the string position of j in the component input, and a(..) defines a multinomial distribution such that ∑J j=0 a(j|i, I, J) = 1.0. We also define a new tree based alignment model (3) that takes into account the syntax associated with the function representations. Each l(j) is the relative tree position of the alignment point, shown as t(j), and tlen(J) is the length of the tree associated with z. This approach assumes a tree representation for each z. We generated these trees heuristically by preserving the information that is lost when components are converted to a linear sequence representation. An example structure for PHP is shown in Figure 4, where the red solid line indicates the types of potential errors avoided by this model. Learning is done by applying the standard EM training procedure of Brown et al. (1993). 3.3 Ranking and Decoding Algorithm 1 shows how to rank API components. 
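As a concrete illustration, the sketch below implements the ranking loop of Algorithm 1 with the smoothed term-matching baseline as the scoring model A, reading f(w|z) and f(w|C) as relative frequencies; swapping in the Model 1 score only changes the inner scoring function. The toy component set is adapted from the Ruby example in Figure 3, and everything else (function names, the 1e-12 floor) is an illustrative assumption rather than the authors' implementation.

import math
from collections import Counter

def term_match_score(query, component_terms, collection_counts, total, lam=0.5):
    # Smoothed unigram score for p(x|z): mix each query word's relative
    # frequency in the component's term sequence with its relative frequency
    # in the whole documentation collection (Jelinek-Mercer smoothing).
    comp = Counter(component_terms)
    score = 0.0
    for w in query:
        p_z = comp[w] / max(len(component_terms), 1)
        p_c = collection_counts[w] / max(total, 1)
        p = (1 - lam) * p_z + lam * p_c
        score += math.log(p if p > 0 else 1e-12)   # floor for fully unseen words
    return score

def rank_components(query, components, collection_counts, total, k=10):
    # Algorithm-1-style ranking: score every known component, keep the k best.
    scored = [(z, term_match_score(query, terms, collection_counts, total))
              for z, terms in components.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

components = {"Math.atan2(y,x)": ["math", "atan2", "y", "x"],
              "Math.atan(x)": ["math", "atan", "x"]}
coll = Counter(t for terms in components.values() for t in terms)
print(rank_components(["arc", "tangent", "y", "x"], components, coll,
                      sum(coll.values()), k=2))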
For a text input x, we iterate through all known API components C and assign a score using a model A. We then rank the components by their scores using a K-BEST function. This method serves as a type of word-based decoding algorithm 1616 bool ZipArchive::deleteName(string $name) bool3 bool string $name2 name string deleteName1 name delete ZipArchive0 ZipArchive Delete entry in an archive using its name X012 → ⟨ X 01 X 2 , X 01 X 2 bool ⟩ X01 → ⟨ X 1 in an X 0 , X 0 X 1 ⟩ X1 → ⟨ Delete X 1 , delete X 1 ⟩ X1 → ⟨ entry, name ⟩ X0 → ⟨ archive, ZipArchive ⟩ X2 → ⟨ using its X 2 , X 2 ⟩ X2 → ⟨ name, string $name ⟩ Figure 4: An example tree structure (above) associated with an input component. Below are Hiero rules (Chiang, 2007) extracted from the alignment and tree information. which is simplified by the finite nature of the target language. The complexity of the scoring procedure, lines 3-5, is linear over the number components m in C. In practice, we implement the K-BEST sorting function on line 6 as a binary insertion sort on line 5, resulting in an overall complexity of O(m log m). While iterating over m API components might not be feasible given more complicated formal languages with recursion, a more clever decoding algorithm could be applied, e.g., one based on the lattice decoding approach of (Dyer et al., 2008). Since we are interested in providing initial baseline results, we leave this for future work. 4 Discriminative Approach In this section, we introduce a new model that aims to improve on the previous baseline methods. While the previous models are restricted to word-level information, we extend this approach by using a discriminative reranking model that captures phrase information to see if this leads to an improvement. This model can also capture document-level information from the APIs, such as the additional textual descriptions of parameters, see also declarations or classes of related functions and syntax information. 4.1 Modeling Like in most semantic parsing approaches (Zettlemoyer and Collins, 2009; Liang et al., 2011), our model is defined as a conditional log-linear z: function float cosh float $arg x: Returns the hyperbolic cosine of arg c4 ={ cosh ,acosh,sinh.} ’the arg of..’ ϕ(x,z) = Model score: is it in top 5..10? Pairs/Alignments: (hyperbolic, cosh) = 1, (cosine, cosh) = 1, ... Phrases: (hyperbolic cosine, cosh) = 1, (of arg, float $arg) = ... See also: (hyperbolic, c4 = {cos,..}) = 1, (arg, c4) = 1, ... In Descr.: (arg, , $arg) = 1, (arg , float) = 0, ... Trees/Matches (hyperbolic, cosh, NAME NODE) = 1, number of matches= ... Figure 5: Example features used by our rerankers. model over components z ∈C with parameters θ ∈Rb, and a set of feature functions ϕ(x, z): p( z| x; θ) ∝eθ·ϕ(x,z). Formally, our training objective is to maximize the conditional log-likehood of the correct component output z for each input x: O(θ) = ∑n i=1 log p (zi | xi; θ). 4.2 Features Our model uses word-level features, such as word match, word pairs, as well as information from the underlying aligner model such as Viterbi alignment information and model score. Two additional categories of non-word features are described below. An illustration of the feature extraction procedure is shown in Figure 5 1. Phrases Features We extract phrase features (e.g., (hyper. cosine, cosh) in Figure 5) from example text component pairs by training symmetric word aligners and applying standard word-level heuristics (Koehn et al., 2003). 
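A simplified version of that extraction heuristic is sketched below: it enumerates source spans, finds the target span covered by their alignment points, and keeps only pairs whose links are consistent (no link crosses the phrase boundary). It omits the unaligned-word expansion step of the full Koehn et al. heuristic, and the toy alignment is an illustrative assumption based on the Figure 5 example.

def extract_phrases(src, tgt, alignment, max_len=3):
    # Consistent phrase-pair extraction from a symmetrized word alignment;
    # `alignment` is a set of (source index, target index) links.
    phrases = set()
    for i1 in range(len(src)):
        for i2 in range(i1, min(i1 + max_len, len(src))):
            js = [j for (i, j) in alignment if i1 <= i <= i2]
            if not js:
                continue
            j1, j2 = min(js), max(js)
            if j2 - j1 >= max_len:
                continue
            # Consistency: no link from outside the source span may land
            # inside the target span.
            if any(j1 <= j <= j2 and not (i1 <= i <= i2) for (i, j) in alignment):
                continue
            phrases.add((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
    return phrases

src = "returns the hyperbolic cosine of arg".split()
tgt = "float cosh float $arg".split()
links = {(2, 1), (3, 1), (5, 3)}   # hyperbolic->cosh, cosine->cosh, arg->$arg
print(extract_phrases(src, tgt, links))   # includes ('hyperbolic cosine', 'cosh') and ('of arg', '$arg')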
Additional features, such as phrase match/overlap, tree positions of phrases, are defined over the extracted phrases. We also extract hierarchical phrases (Chiang, 2007) using a variant of the SAMT method of Zollmann and Venugopal (2006) and the component syntax trees. Example rules are shown in Figure 4, where gaps (i.e., symbols in square brackets) are filled with smaller phrase-tree alignments. Document Level Features Document features are of two categories. The first includes additional textual descriptions of parameters, return values, and modules. One class of features is whether certain words under consideration appear in the @param and @return descriptions of the target components. For example, the arg token in 1A more complete description of features is included as supplementary material, along with all source code. 1617 Algorithm 2 Online Rank Learner Input: Dataset D, components C, iterations T, rank k, learning rate α, model A, ranker function RANK Output: Weight vector θ 1: procedure LEARNRERANKER(D, C, T, k, α, A, RANK) 2: θ ←0 ▷Initialize 3: for t ∈1..T do 4: for pairs (xi, zi) ∈D do 5: S = RANK(xi, C, k, A) ▷Scored candidates 6: ∆= ϕ(xi, zi) −Es∈S∼p(s|xi;θ)[ϕ(xi, s)] 7: θ = θ + α∆ ▷Update online 8: return θ Figure 5 appears in the textual description of the $arg parameter elsewhere in the documentation string. Other features relate to general information about abstract symbol categories, as specified in see-also assertions, or hyper-link pointers. By exploiting this information, we extract general classes of functions, for example the set of hyperbolic function (e.g., sinh, cosh, shown as c4 in Figure 5), and associate these classes with words and phrases (e.g., hyperbolic and hyperbolic cosine). 4.3 Learning To optimize our objective, we use Algorithm 2. We estimate the model parameters θ using a Kbest approximation of the standard stochastic gradient updates (lines 6-7), and a ranker function RANK. We note that while we use the ranker described in Algorithm 1, any suitable ranker or decoding method could be used here. 5 Experimental Setup 5.1 Datasets Source code documentation Our source code documentation collection consists of the standard library for nine programming languages, which are listed in Figure 3. We also use the translated version of the PHP collection for six additional languages, the details of which are shown in Figure 6. The Java dataset was first used in DC, while we extracted all other datasets for this work. The size of the different datasets are detailed in both figures. The number of pairs is the number of single sentences paired with function representations, which constitutes the core part of these datasets. The number of descriptions is the number of additional textual descriptions provided in the overall document, such as descriptions of parameters or return values. Dataset # Pairs #Descr. Symbols Words Vocab. PHPfr 6,155 14,058 7,922 70,800 5,904 PHPes 5,823 13,285 7,571 69,882 5,790 PHPja 4,903 11,251 6,399 65,565 3,743 PHPru 2,549 6,030 3,340 23,105 4,599 PHPtr 1,822 4,414 2,725 16,033 3,553 PHPde 1,538 3,733 2,417 17,460 3,209 Figure 6: The non-English PHP datasets. We also quantify the different datasets in terms of unique symbols in the target representations, shown as Symbols. All function representations and code sequences are linearized, and in some cases further tokenized, for example, by converting out of camel case or removing underscores. 
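The exact tokenization rules are not spelled out in the paper, but the sketch below illustrates the kind of linearization described above: strip punctuation, split snake_case and camelCase identifiers, and lowercase the resulting terms. The regular expressions are illustrative assumptions.

import re

def tokenize_signature(signature):
    # Linearize a function signature into component terms: split on
    # non-alphanumeric characters, then break snake_case and camelCase.
    terms = []
    for piece in re.split(r"[^A-Za-z0-9_]+", signature):
        for part in piece.split("_"):
            terms.extend(t.lower() for t in
                         re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z0-9]+", part))
    return terms

print(tokenize_signature("bool ZipArchive::deleteName(string $name)"))
# ['bool', 'zip', 'archive', 'delete', 'name', 'string', 'name']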
Man pages The collection of man pages is from Richardson and Kuhn (2014) and includes 921 text-code pairs that span 330 Unix utilities and man pages. Using information from the synopsis and parameter declarations, the target code representations are abstracted by type. The extra descriptions are extracted from parameter descriptions, as shown in the DESCRIPTION section in Figure 1, as well as from the NAME sections of each manual. 5.2 Evaluation For evaluation, we split our datasets into separate training, validation and test sets. For Java, we reserve 60% of the data for training and the remaining 40% for validation (20%) and testing (20%). For all other datasets, we use a 70%-30% split. From a retrieval perspective, these left out descriptions are meant to mimic unseen queries to our model. After training our models, we evaluate on these held out sets by ranking all known components in each resource using Algorithm 1. A predicted component is counted as correct if it matches exactly a gold component. Following DC, we report the accuracy of predicting the correct representation at the first position in the ranked list (Accuracy @1) and within the top 10 positions (Accuracy @10). We also report the mean reciprocal rank MRR, or the multiplicative inverse of the rank of the correct answer. Baselines For comparison, we trained a bag-ofwords classifier (the BoW Model in Table 1). This model uses the occurrence of word-component symbol pairs as binary features, and aims to see if word co-occurrence alone is sufficient to for ranking representations. 1618 Method Java PHPen Python Haskell Clojure Ruby Elisp C BOW Model 16.4 63.8 31.8 08.0 40.5 18.1 04.1 33.3 13.6 05.6 55.6 21.7 03.0 49.2 16.4 07.0 38.0 16.9 09.9 54.6 23.5 08.8 48.8 20.0 Term Match 15.7 41.3 24.8 15.6 37.0 23.1 16.6 41.7 24.8 15.4 41.8 24.0 20.7 49.2 30.0 23.1 46.9 31.2 29.3 65.4 41.4 13.1 37.5 21.9 IBM M1 34.3 79.8 50.2 35.5 70.5 47.2 22.7 61.0 35.8 22.3 70.3 39.6 29.6 69.2 41.6 31.4 68.5 44.2 30.6 67.4 43.5 21.8 63.7 34.4 IBM M2 30.3 77.2 46.5 33.2 67.7 45.0 21.4 58.0 34.4 13.8 68.2 31.8 26.5 64.2 38.2 27.9 66.0 41.4 28.1 66.1 40.7 23.7 60.9 34.6 Tree Model 29.3 75.4 45.3 28.0 63.2 39.8 17.5 55.4 30.7 17.8 65.4 35.2 23.0 60.3 34.4 27.1 63.3 39.5 26.8 63.2 39.7 18.1 56.2 29.4 M1 Descr. 33.3 77.0 48.7 34.1 71.1 47.2 22.7 62.3 35.9 23.9 69.5 40.2 29.6 69.2 41.6 32.5 70.0 45.5 30.3 73.4 44.7 21.8 62.7 33.9 Reranker 35.3 81.5 51.4 36.9 74.2 49.3 25.5 66.0 38.7 24.7 73.9 43.0 35.0 76.9 47.9 35.1 72.5 48.0 37.6 80.5 53.3 29.7 67.4 40.1 Method Scheme PHPfr PHPes PHPja PHPru PHPtr PHPde Unix BOW Model 06.1 58.1 21.4 06.1 36.9 16.0 05.9 37.8 15.8 04.7 33.2 13.8 04.4 43.6 16.6 05.4 43.4 17.6 04.3 39.2 15.3 08.6 49.6 21.0 Term Match 25.5 61.2 37.4 04.0 15.8 07.7 02.9 10.4 05.4 02.3 11.2 05.2 01.0 09.3 03.6 01.4 08.7 03.6 03.8 09.4 06.2 15.1 33.8 22.4 IBM M1 32.1 75.5 46.2 32.1 65.1 43.5 29.5 63.7 41.2 23.0 58.1 34.9 20.3 58.4 33.3 25.9 61.6 38.6 22.8 62.5 36.8 30.2 66.9 42.2 IBM M2 29.5 71.4 43.9 30.6 62.2 41.2 26.7 59.8 38.3 22.2 56.1 33.3 18.5 54.5 30.6 23.3 57.6 35.8 19.8 58.6 33.0 23.0 60.4 36.0 Tree Model 26.1 71.2 40.3 27.9 59.3 38.6 25.9 61.0 37.6 22.6 57.8 34.1 20.6 59.0 32.9 18,9 55.1 32.0 18.5 56.0 30.6 23.0 58.2 34.3 M1 Descr. 
33.1 75.5 47.1 31.0 64.8 42.7 28.6 64.9 41.1 25.4 60.4 37.0 21.1 62.6 34.5 29.1 62.0 41.4 26.7 62.0 38.8 34.5 71.9 47.4 Reranker 34.6 77.5 48.9 32.7 66.8 44.2 30.6 66.3 42.6 25.8 61.8 37.8 21.1 66.8 35.9 29.9 63.8 41.2 28.0 65.9 40.5 34.5 74.8 48.5 Accuracy @1 Accuracy @10 Mean Reciprocal Rank (MRR) Table 1: Test results according to the table below. Since our discriminative models use more data than the baseline models, which therefore make the results not directly comparable, we train a more comparable translation model, shown as M1 Descr. in Table 1, by adding the additional textual data (i.e. parameter and return or module descriptions) to the models’ parallel training data. 6 Results and Discussion Test results are shown in Table 1. Among the baseline models, IBM Model 1 outperforms virtually all other models and is in general a strong baseline. Of particular note is the poor performance of the higher-order translation models based on Model 2 and the Tree Model. While Model 2 is known to outperform Model 1 on more conventional translation tasks (Och and Ney, 2003), it appears that such improvements are not reflected in this type of semantic translation context. The bag-of-words (BoW) and Term Match baselines are outperformed by all other models. This shows that translation in this context is more complicated than simple word matching. In some cases the term matching baseline is competitive with other models, suggesting that API collections differ in how language descriptions overlap with component names and naming conventions. It is clear, however, that this heuristic only works for English, as shown by results on the non-English PHP datasets in Table 1. We achieve improvements on many datasets by adding additional data to the translation model (M1 Descr.). We achieve further improvements on all datasets using the discriminative model (Reranker), with most increases in performance occurring at how the top ten items are ranked. This last result suggests that phrase-level and document-level features can help to improve the overall ranking and translation, though in some cases the improvement is rather modest. Despite the simplicity of our semantic parsing model and decoder, there is still much room for improvement, especially on achieving better Accuracy @1. While one might expect better results when moving from a word-based model to a model that exploits phrase and hierarchical phrase features, the sparsity of the component vocabulary is such that most phrase patterns in the training are not observed in the evaluation. In many benchmark semantic parsing datasets, such sparsity issues do not occur (Cimiano and Minock, 2009), suggesting that state-of-the-art methods will have similar problems when applied to our datasets. Recent approaches to open-domain semantic parsing have dealt with this problem by using paraphrasing techniques (Berant and Liang, 2014) or distant supervision (Reddy et al., 2014). We expect that these methods can be used to improve our models and results, especially given the wide availability of technical documentation, for example, distributed within the Opus project (Tiedemann, 2012). Model Errors We performed analysis on some of the incorrect predictions made by our models. For some documentation sets, such as those in the GNU documentation collection2, information is organized into a small and concrete set of categories/chapters, each corresponding to various features or modules in the language and related functions. 
Given this information, Figure 2https://www.gnu.org/doc/doc.en.html 1619 Associations Datatypes Environments Procedures OS IO Graphics Errors Windows Other Scheme Equiv. Spec. Forms Characters Numbers Lists Strings Bit Strings Vectors Vectors Bit Strings Strings Lists Numbers Characters Spec. Forms Equiv. Scheme Other Windows Errors Graphics IO OS Procedures Environments Datatypes Associations dash sel Files Backups Buffers Windows CommandLoop Keymaps Modes Documentation Frames Positions Strings Datatypes Characters Numbers Hash Tables Sequences Evaluation Symbols OS Garbage Coll. Distr. Functions Loading Customization Debug Minibuffers Non-Ascii Text Markers Display Processes Abbrevs Syntax Tables Search/Match Read/Write Read/Write Search/Match Syntax Tables Abbrevs Processes Display Markers Text Non-Ascii Minibuffers Debug Customization Loading Functions Distr. Garbage Coll. OS Symbols Evaluation Sequences Hash Tables Numbers Characters Datatypes Strings Positions Frames Documentation Modes Keymaps CommandLoop Windows Buffers Backups Files sel dash Figure 7: Function predictions by documentation category for Scheme (left) and Elisp (right). 7 shows the confusion between predicting different categories of functions, where the rows show the categories of functions to be predicted and the columns show the different categories predicted. We built these plots by finding the categories of the top 50 non-gold (or erroneous) representations generated for each validation example. The step-like lines through the diagonal of both plots show that alternative predictions (shaded according to occurrence) are often of the same category, most strikingly for the corner categories. This trend seems stable across other datasets, even among datasets with large numbers of categories. Interestingly, many confusions appear to be between related categories. For example, when making predictions about Strings functions in Scheme, the model often generates function related to BitStrings, Characters and IO. Again, this trend seems to hold for other documentation sets, suggesting that the models are often making semantically sensible decisions. Looking at errors in other datasets, one common error involves generating functions with the same name and/or functionality. In large libraries, different modules sometimes implement that same core functions, such the genericpath or posixpath modules in Python. When generating a representation for the text return size of file, our model confuses the getsize(filename) function in one module with others. Similarly, other subtle distinctions that are not explicitly expressed in the text descriptions are not captured, such as the distinction in Haskell between safe and unsafe bit shifting functions. While many of these predictions might be correct, our evaluation fails to take into account these various equivalences, which is an issue that should be investigated in future work. Future work will also look systematically at the effect that types (i.e., in statically typed versus dynamic languages) have on prediction. 7 Future Work We see two possible use cases for this data. First, for benchmarking semantic parsing models on the task of semantic translation. While there has been a trend towards learning executable semantic parsers (Berant et al., 2013; Liang, 2016), there has also been renewed interest in supervised learning of formal representations in the context of neural semantic parsing models (Dong and Lapata, 2016; Jia and Liang, 2016). 
We believe that good performance on our datasets should lead to better performance on more conventional semantic parsing tasks, and raise new challenges involving sparsity and multilingual learning. We also see these resources as useful for investigations into natural language programming. While our experiments look at learning rudimentary translational correspondences between text and code, a next step might be learning to synthesize executable programs via these translations, along the lines of (Desai et al., 2016; Raza et al., 2015). Other document-level features, such as example input-output pairs, unit tests, might be useful in this endeavor. Acknowledgements This work was funded by the Deutsche Forschungsgemeinschaft (DFG) via SFB 732, project D2. Thanks also to our IMS colleagues, in particular Christian Scheible, for providing feedback on earlier drafts, as well as to Jonathan Berant for helpful discussions. 1620 References Miltiadis Allamanis, Daniel Tarlow, Andrew D Gordon, and Yi Wei. 2015. Bimodal modelling of source code and natural language. In Proceedings of the 32th International Conference on Machine Learning. volume 951, page 2015. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1699–1710. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In in Proceedings of EMNLP-2013. pages 1533–1544. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of ACL2014. pages 1415–1425. Benjamin B¨orschinger, Bevan K. Jones, and Mark Johnson. 2011. Reducing grounded learning tasks to grammatical inference. In Proceedings of EMNLP2011. pages 1416–1425. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics 19(2):263–311. David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In Proceedings of ICML-2008. pages 128– 135. David Chiang. 2007. Hierarchical phrase-based translation. computational linguistics 33(2):201–228. Philipp Cimiano and Michael Minock. 2009. Natural language interfaces: what is the problem?–a datadriven quantitative analysis. In International Conference on Application of Natural Language to Information Systems. Springer, pages 192–206. Huijing Deng and Grzegorz Chrupała. 2014. Semantic approaches to software component retrieval with English queries. In Proceedings of LREC-14. pages 441–450. Aditya Desai, Sumit Gulwani, Vineet Hingorani, Nidhi Jain, Amey Karkare, Mark Marron, Subhajit Roy, et al. 2016. Program synthesis using natural language. In Proceedings of the 38th International Conference on Software Engineering. ACM, pages 345–356. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. arXiv preprint arXiv:1601.01280 . Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing word lattice translation. Proceedings of ACL-08 page 1012. Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, and Sunghun Kim. 2016. 
Deep API Learning. arXiv preprint arXiv:1605.08535 . Srinivasan Iyer, Ioannis Kostas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. Proceedings of ACL2016 . Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. arXiv preprint arXiv:1606.03622 . Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the NACL-2003. pages 48–54. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of ACL-2014. pages 271–281. Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Proceedings of NAACL2013. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In Proceedings of EMNLP-2010. pages 1223–1233. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of ACL-11. pages 590–599. Percy Liang. 2016. Learning executable semantic parsers for natural language understanding. Communications of the ACM 59(9):68–76. Fei Lv, Hongyu Zhang, Jian-guang Lou, Shaowei Wang, Dongmei Zhang, and Jianjun Zhao. 2015. Codehow: Effective code search based on api understanding and extended boolean model (e). In Automated Software Engineering (ASE), 2015 30th IEEE/ACM International Conference on. IEEE, pages 260–270. Mehdi Hafezi Manshadi, Daniel Gildea, and James F Allen. 2013. Integrating programming by example and natural language programming. In Proceedings of AAAI-2013. Cynthia Matuszek, Evan Herbst, Luke Zettlemoyer, and Dieter Fox. 2012. Learning to parse natural language commands to a robot control system. In Proceedings of the International Symposium on Experimental Robotics (ISER). 1621 Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics 29(1):19–51. Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation (t). In Automated Software Engineering (ASE), 2015 30th IEEE/ACM International Conference on. IEEE, pages 574–584. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of ACL-2015. Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A Synchronous Hyperedge Replacement Grammar based approach for AMR parsing. Proceedings of CoNLL-2015 page 32. Chris Quirk, Raymond J Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of ACL2015. pages 878–888. Mohammad Raza, Sumit Gulwani, and Natasa MilicFrayling. 2015. Compositional program synthesis from natural language and examples. In IJCAI. pages 792–800. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without questionanswer pairs. Transactions of the Association for Computational Linguistics 2:377–392. Kyle Richardson and Jonas Kuhn. 2014. UnixMan corpus: A resource for language learning in the Unix domain. In Proceedings of LREC-2014. Kyle Richardson and Jonas Kuhn. 2016. Learning to make inferences in a semantic parsing task. Transactions of the Association for Computational Linguistics 4:155–168. F. Song and W.B Croft. 1999. 
A general language model for information retrieval. In in Proceedings International Conference on Information and Knowledge Management. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in opus. In LREC. volume 2012, pages 2214– 2218. Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of HLT-NAACL-2006. pages 439–446. Yuk Wah Wong and Raymond J Mooney. 2007a. Generation by inverting a semantic parser that uses statistical machine translation. In Proceedings of HLTNAACL-2007. pages 172–179. Yuk Wah Wong and Raymond J. Mooney. 2007b. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of ACL2007. Prague, Czech Republic. John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of AAAI-1996. pages 1050–1055. Luke S. Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of ACL-2009. pages 976–984. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of the Workshop on Statistical Machine Translation. pages 138–141. 1622
2017
148
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1623–1633, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1149

Bridging Text and Knowledge by Learning Multi-Prototype Entity Mention Embedding

Yixin Cao1, Lifu Huang2, Heng Ji2, Xu Chen1, Juanzi Li1* (*Corresponding author.)
1 Tsinghua National Laboratory for Information Science and Technology, Dept. of Computer Science and Technology, Tsinghua University, China 100084 {caoyixin2011,successcx,lijuanzi2008}@gmail.com
2 Dept. of Computer Science, Rensselaer Polytechnic Institute, USA 12180 {huangl7,jih}@rpi.edu

Abstract

Integrating text and knowledge into a unified semantic space has attracted significant research interest recently. However, ambiguity in the common space remains a challenge, namely that the same mention phrase usually refers to various entities. In this paper, to deal with the ambiguity of entity mentions, we propose a novel Multi-Prototype Mention Embedding model, which learns multiple sense embeddings for each mention by jointly modeling words from textual contexts and entities derived from a knowledge base. In addition, we further design an efficient language model based approach to disambiguate each mention to a specific sense. In experiments, both qualitative and quantitative analysis demonstrate the high quality of the word, entity and multi-prototype mention embeddings. Using entity linking as a case study, we apply our disambiguation method as well as the multi-prototype mention embeddings on the benchmark dataset, and achieve state-of-the-art performance.

1 Introduction

Jointly learning text and knowledge representations in a unified vector space greatly benefits many Natural Language Processing (NLP) tasks, such as knowledge graph completion (Han et al., 2016; Wang and Li, 2016), relation extraction (Weston et al., 2013), word sense disambiguation (Mancini et al., 2016), entity classification (Huang et al., 2017) and linking (Huang et al., 2015). Existing work can be roughly divided into two categories. One is encoding words and entities into a unified vector space using Deep Neural Networks (DNN). These methods suffer from the problems of expensive training and great limitations on the size of the word and entity vocabulary (Han et al., 2016; Toutanova et al., 2015; Wu et al., 2016). The other is to learn word and entity embeddings separately, and then align similar words and entities into a common space with the help of Wikipedia hyperlinks, so that they share similar representations (Wang et al., 2014; Yamada et al., 2016).

[Figure 1: Examples. The figure shows three documents d1, d2 and d3 (Text panel) containing the mentions Independence Day (m1) and July 4th (m2) (Mention panel), and a Knowledge Base panel with the entities Independence Day (film) (e1) and Independence Day (US) (e2).]

However, there are two major problems arising from directly integrating word and entity embeddings into a unified semantic space. First, mention phrases are highly ambiguous and can refer to multiple entities in the common space.
As shown in Figure 1, the same mention independence day (m1) can refer either to a holiday, Independence Day (US), or to a film, Independence Day (film). Second, an entity often has various aliases when mentioned in different contexts, which implies a much larger mention vocabulary than entity vocabulary. For example, in Figure 1, the documents d2 and d3 describe the same entity Independence Day (US) (e2) with distinct mentions: independence day and July 4th. We observe tens of millions of mentions referring to 5 million entities in Wikipedia.

To address these issues, we propose to learn multiple embeddings for mentions, inspired by the Word Sense Disambiguation (WSD) task (Reisinger and Mooney, 2010; Huang et al., 2012; Tian et al., 2014; Neelakantan et al., 2014; Li and Jurafsky, 2015). The basic idea is that entities in KBs can provide a meaning repository for mentions (i.e., words or phrases) in texts. That is, each mention has one or multiple meanings, namely mention senses, and each sense corresponds to an entity. Furthermore, we assume that different mentions referring to the same entity express the same meaning and share a common mention sense embedding, which largely reduces the size of the mention vocabulary to be learned. For example, the mentions Independence Day in d2 and July 4th in d3 have a common mention sense embedding during training since they refer to the same holiday. Thus, text and knowledge are bridged via mention senses.

In this paper, we propose a novel Multi-Prototype Mention Embedding (MPME) model, which jointly learns the representations of words, entities, and mentions at the sense level. Different mention senses are distinguished by taking advantage of both textual context information and knowledge of the reference entities. Following the frameworks of Wang et al. (2014) and Yamada et al. (2016), we use separate models to learn the representations of words, entities and mentions, and further align them with a unified optimization objective. Extending the skip-gram and CBOW models, our model can be trained efficiently (Mikolov et al., 2013a,b) on a large-scale corpus. In addition, we also design a language model based approach to determine the sense of each mention in a document based on the multi-prototype mention embeddings.

For evaluation, we first provide qualitative analysis to verify the effectiveness of MPME in bridging text and knowledge representations at the sense level. Then, separate tasks for words and entities show improvements from using our word, entity and mention representations. Finally, using entity linking as a case study, experimental results on the benchmark dataset demonstrate the effectiveness of our embedding model as well as the disambiguation method.

2 Preliminaries

In this section, we formally define the input and output of multi-prototype mention embedding. A knowledge base KB contains a set of entities E = {e_j} and their relations. We use Wikipedia as the given knowledge base and organize it as a directed knowledge network: nodes denote entities, and edges are outlinks from Wikipedia pages. In the directed network, we define the entities that point to e_j as its neighbors N(e_j), but ignore the entities that e_j points to, so that repeated computation on the same edge is avoided (as would occur if the edges were treated as undirected). A text corpus D is a set of sequential words D = {w_1, ..., w_i, ..., w_|D|}, where w_i is the i-th word and |D| is the length of the word sequence.
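The neighbor convention just defined (N(e_j) collects the entities whose pages link to e_j, while e_j's own outlinks are ignored so that each edge is counted only once) can be made concrete with a minimal sketch. This is an illustration only: the outlinks dictionary, the toy entity names and the function name are assumptions for exposition, not the paper's code or any Wikipedia API.

```python
from collections import defaultdict

def build_neighbors(outlinks):
    """Build N(e_j): for every entity e_j, the set of entities whose
    Wikipedia pages contain an outlink pointing to e_j.

    `outlinks` maps an entity to the set of entities its page links to.
    Each directed edge (e_i -> e_j) is stored exactly once, on the side
    of the target e_j, which avoids the double counting that would occur
    if the network were treated as undirected.
    """
    neighbors = defaultdict(set)
    for e_i, targets in outlinks.items():
        for e_j in targets:
            neighbors[e_j].add(e_i)  # e_i points to e_j, so e_i is in N(e_j)
    return neighbors

# Hypothetical toy knowledge base loosely mirroring Figures 1 and 2.
outlinks = {
    "Independence Day (US)": {"United States", "Fireworks"},
    "Independence Day (film)": {"Will Smith", "United States"},
    "Memorial Day": {"United States"},
}
N = build_neighbors(outlinks)
print(sorted(N["United States"]))  # entities whose pages link to "United States"
```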
Since an entity mention m_l may consist of multiple words, we define an annotated text corpus as D′ = {x_1, ..., x_i, ..., x_|D′|}, where x_i corresponds to either a word w_i or a mention m_l. We define the words around x_i within a predefined window as its context words C(x_i). An Anchor is a Wikipedia hyperlink from a mention m_l linking to its entity e_j, and is represented as a pair <m_l, e_j> ∈ A. The anchors provide mention boundaries as well as their reference entities from Wikipedia articles. These Wikipedia articles are used as the annotated text corpus D′ in this paper. (Footnote 1: Generally, the mention boundary can be obtained using NER tools such as Stanford NER (Finkel et al., 2005); in this paper, we use Wikipedia anchors as annotations of the Wikipedia text corpus to keep the focus on our main purpose.)

Multi-Prototype Mention Embedding. Given a KB, an annotated text corpus D′ and a set of anchors A, we aim to learn multi-prototype mention embeddings, namely multiple sense embeddings s^l_j ∈ R^k for each mention m_l, as well as word embeddings w and entity embeddings e. We use M^*_l = {s^l_j} to denote the sense set of mention m_l, where each s^l_j refers to an entity e_j. Thus, the vocabulary size is reduced to a fixed number |{s^*_j}| = |E|. We use s^*_j to denote the shared sense of the mentions referring to entity e_j.

Example. As shown in Figure 1, Independence Day (m1) has two mention senses s^1_1 and s^1_2, and July 4th (m2) has one mention sense s^2_2. Based on the assumption in Section 1, we have s^*_2 = s^1_2 = s^2_2, referring to the entity Independence Day (US) (e2). A small code sketch of this bookkeeping follows the method overview below.

3 An Overview of Our Method

Given a knowledge base KB, an annotated text corpus D′ and a set of anchors A, we aim to jointly learn word, entity and mention sense representations: w, e, s. As shown in Figure 2, our framework contains two key components:
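To make the Preliminaries and the Example above concrete, the following minimal sketch shows the sense bookkeeping they describe: every anchor <m, e_j> contributes a sense keyed by the referred entity, so surface forms that link to the same entity (independence day in d2 and July 4th in d3) share one sense s^*_j, and the number of distinct senses is bounded by |E|. The data structures, the function name and the toy anchors are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def build_sense_inventory(anchors):
    """Map mentions to sense ids and sense ids to surface forms.

    `anchors` is a list of (mention, entity) pairs harvested from
    Wikipedia hyperlinks.  A sense id is simply the referred entity,
    so mentions that link to the same entity share one sense, and the
    sense vocabulary size is at most the number of entities |E|.
    """
    senses_of_mention = defaultdict(set)   # m_l -> M*_l (its candidate senses)
    mentions_of_sense = defaultdict(set)   # s*_j -> surface forms that share it
    for mention, entity in anchors:
        sense_id = entity                  # shared sense s*_j keyed by entity e_j
        senses_of_mention[mention].add(sense_id)
        mentions_of_sense[sense_id].add(mention)
    return senses_of_mention, mentions_of_sense

# Toy anchors reproducing the Example: m1 = "independence day", m2 = "july 4th".
anchors = [
    ("independence day", "Independence Day (film)"),  # sense s^1_1
    ("independence day", "Independence Day (US)"),    # sense s^1_2
    ("july 4th", "Independence Day (US)"),            # sense s^2_2 == s^1_2 == s*_2
]
senses, surface_forms = build_sense_inventory(anchors)
print(senses["independence day"])              # two candidate senses -> ambiguous
print(surface_forms["Independence Day (US)"])  # {'independence day', 'july 4th'}
```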
3 Method In this section, we present three main components in MPME: text model, knowledge model and joint model, and then introduce the detailed information on training process. Finally, we briefly introduce the framework for entity linking. 3.1 Skip-gram model capable of iterative learning; capable of learning more mention names; capable of tuning mention sense via text model; capable of NIL sense; 1. take pre-trained word and entity embeddings as input; 2. collect mention name to entity title mapping; use anchor to annotate each mention. each mention corresponds multiple sense; each sense relates 2Thus, MPME only trains text model and joint model. 3.4 Joint model A X P(ej|wm t , si) + P(ej|wcontext) eIndependence Day (film) eIndependence Day (US) wm1 Independence Day wm2 Independence Day wfilm wcelebrations wm M emorial Day 4 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 dings. Actually, MPME is flexible to utilize pretrained entity embeddings from arbitrary knowledge representation model, and enjoys their advantages of different aspects in knowledge bases2. This is reasonable because we output two separately semantic vector spaces for text and knowledge respectively, while we can still obtain the relatedness between word and entity indirectly by computing similarity between word and mention embeddings referring to that entity. 3 Method In this section, we present three main components in MPME: text model, knowledge model and joint model, and then introduce the detailed information on training process. Finally, we briefly introduce the framework for entity linking. 3.1 Skip-gram model capable of iterative learning; capable of learning more mention names; capable of tuning mention sense via text model; capable of NIL sense; 1. take pre-trained word and entity embeddings as input; 2. collect mention name to entity title mapping; use anchor to annotate each mention. each mention corresponds multiple sense; each sense relates 2Thus, MPME only trains text model and joint model. X X P(eneighbor|ei) 3.4 Joint model A X P(ej|wm t , si) + P(ej|wcontext) eIndependence Day (film) eIndependence Day (US) wm1 Independence Day wm2 Independence Day wfilm wcelebrations wm M emorial Day 5 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 ACL 2016 Submission ***. Confidential review copy. DO NOT DISTRIBUTE. 3.2 Skip-gram model g(Independence Day, ) P(N(ej)|ej) P(ej|C(mh), ts l ) P(C(wi)|wi)P(C(mh)|ts l , mh) 3.3 Text model Lw = T X t= 1 log P(wt+ j|wm t , si)P(si|wcontext) + T X t= 1 X −cjc,j6= 0 log P(wt+ j|wt) (1) D X CX P(wt+ j|wm t , si)P(si|wm t , wcontext) 3.4 Knowledge model K B X NX P(eneighbor|ei) 3.5 Joint model A X P(ej|wm t , si) + P(ej|wcontext) 3.6 Training 3.7 Integrating into GBDT for EL 4 Experiment 4.1 Data Preparation 4.2 Baseline Methods 1. directly align words with entity. 2. align mention with entity using single prototype model. 4.3 Parameter Setting 4.4 Qualitative Analysis 4.5 Entity Relatedness 4.6 Word Analogy 4.7 EL evaluation 5 Related Work 6 Conclusion References Antoine Bordes, Nicolas Usunier, Alberto Garc´ıaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Burges et al. 
(Burges et al., 2013), pages 2787–2795. Christopher J. C. Burges, L´eon Bottou, Zoubin mani, and Kilian Q . Weinberger, editors. 2 vances in Neural Information Processing 26: 27th Annual Conference on Neural Inf Processing Systems 2013. Proceedings of ing held December 5-8, 2013, Lake Tahoe, United States. Xu Han, Zhiyuan Liu, and Maosong Sun Joint representation learning of text and edge for knowledge graph completion. abs/1611.04125. Hongzhao Huang, Larry Heck, and Heng J Leveraging deep neural networks and edge graphs for entity disambiguation. abs/1504.07678. Massimiliano Mancini, Jos´e Camacho-Collad cio Iacobacci, and Roberto Navigli. 2016. ding words and senses together via joint kn enhanced training. CoRR, abs/1612.02703 Tomas Mikolov, Kai Chen, Greg Corrado, an Dean. 2013a. Efficient estimation of word tations in vector space. CoRR, abs/1301.37 Tomas Mikolov, Ilya Sutskever, Kai Chen, G Corrado, and Jeffrey Dean. 2013b. Distrib resentations of words and phrases and thei sitionality. In Burges et al. (Burges et al pages 3111–3119. Kristina Toutanova, Danqi Chen, Patrick Pante Choudhury, and Michael Gamon. 2015. R ing text for joint embedding of text and kn bases. ACL Association for Computational tics. Zhigang Wang and Juan-Zi Li. 2016. Textrepresentation learning for knowledge gr Subbarao Kambhampati, editor, Proceedin Twenty-Fifth International Joint Conferenc ficial Intelligence, IJCAI 2016, New Y ork, 9-15 July 2016, pages 1293–1299. IJCA Press. Zhen Wang, Jianwen Zhang, Jianlin Feng, an Chen. 2014. Knowledge graph and text jo bedding. In Alessandro Moschitti, Bo P Walter Daelemans, editors, Proceedings of Conference on Empirical Methods in Natu guage Processing, EMNLP 2014, Octobe 2014, Doha, Q atar, A meeting of SIGDAT, Interest Group of the ACL, pages 1591–160 Jason Weston, Antoine Bordes, Oksana Ya and Nicolas Usunier. 2013. Connecting and knowledge bases with embedding mode lation extraction. In Proceedings of the 20 ference on Empirical Methods in Natural L Processing, EMNLP 2013, 18-21 Octob Grand Hyatt Seattle, Seattle, W ashington meeting of SIGDAT, a Special Interest Gro ACL, pages 1366–1371. ACL. Independence
 Day (US) United
 States Fireworks Independence
 Day (film) Memorial
 Day Celebrations Observed by Public holidays in the United States category Will
 Smith starring Philadelphia born country inlink outlink inlink Knowledge Base 5 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 ACL 2016 Submission ***. Confidential review copy. DO NOT DISTRIBUTE. 3.2 Skip-gram model g(Independence Day, ) P(N(ej)|ej) P(ej|C(mh), ts l ) e1 e2 P(C(wi)|wi) · P(C(mh)|ts l , mh) (1) t1 Independence Day t2 Independence Day t1 M emorial Day g(Independence Day, Independence Day (U S )) (2) g(Independence Day) g(July 4th) (3) 3.3 Text model Lw = T X t= 1 log P(wt+ j|wm t , si)P(si|wcontext) + T X t= 1 X −cjc,j6= 0 log P(wt+ j|wt) (4) D X CX P(wt+ j|wm t , si)P(si|wm t , wcontext) 3.4 Knowledge model K B X NX P(eneighbor|ei) 3.5 Joint model A X P(ej|wm t , si) + P(ej|wcontext) 3.6 Training 3.7 Integrating into GBDT for EL 4 Experiment 4.1 Data Preparation 4.2 Baseline Methods 1. directly align words with entity. 2. align mention with entity using single prototype model. 4.3 Parameter Setting 4.4 Qualitative Analysis 4.5 Entity Relatedness 4.6 Word Analogy 4.7 EL evaluation 5 Related Work 6 Conclusion References Antoine Bordes, Nicolas Usunier, Alberto Garc´ıaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Burges et al. (Burges et al., 2013), pages 2787–2795. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. Hongzhao Huang, Larry Heck, and Heng Ji. 2015. Leveraging deep neural networks and knowledge graphs for entity disambiguation. CoRR, abs/1504.07678. Massimiliano Mancini, Jos´e Camacho-Collados, Ignacio Iacobacci, and Roberto Navigli. 2016. Embedding words and senses together via joint knowledgeenhanced training. CoRR, abs/1612.02703. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Burges et al. (Burges et al., 2013), pages 3111–3119. Kristina Toutanova, Danqi Chen, Patrick Pantel, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. ACL Association for Computational Linguistics. Zhigang Wang and Juan-Zi Li. 2016. Text-enhanced representation learning for knowledge graph. In Subbarao Kambhampati, editor, Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New Y ork, NY , USA, 9-15 July 2016, pages 1293–1299. IJCAI/AAAI Press. 
5 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 ACL 2016 Submission ***. Confidential review copy. DO NOT DISTRIBUTE. 3.2 Skip-gram model g(Independence Day, ) P(N(ej)|ej) P(ej|C(mh), ts l ) e1 e2 P(C(wi)|wi) · P(C(mh)|ts l , mh) (1) t1 Independence Day t2 Independence Day t1 M emorial Day g(Independence Day, Independence Day (U S )) (2) g(Independence Day) g(J uly 4th) (3) 3.3 Text model Lw = T X t= 1 log P(wt+ j|wm t , si)P(si|wcontext) + T X t= 1 X −cjc,j6= 0 log P(wt+ j|wt) (4) D X CX P(wt+ j|wm t , si)P(si|wm t , wcontext) 3.4 Knowledge model K B X NX P(eneighbor|ei) 3.5 Joint model A X P(ej|wm t , si) + P(ej|wcontext) 3.6 Training 3.7 Integrating into GBDT for EL 4 Experiment 4.1 Data Preparation 4.2 Baseline Methods 1. directly align words with entity. 2. align mention with entity using single prototype model. 4.3 Parameter Setting 4.4 Qualitative Analysis 4.5 Entity Relatedness 4.6 Word Analogy 4.7 EL evaluation 5 Related Work 6 Conclusion References Antoine Bordes, Nicolas Usunier, Alberto Garc´ıaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Burges et al. (Burges et al., 2013), pages 2787–2795. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. Hongzhao Huang, Larry Heck, and Heng Ji. 2015. Leveraging deep neural networks and knowledge graphs for entity disambiguation. CoRR, abs/1504.07678. Massimiliano Mancini, Jos´e Camacho-Collados, Ignacio Iacobacci, and Roberto Navigli. 2016. Embedding words and senses together via joint knowledgeenhanced training. CoRR, abs/1612.02703. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Burges et al. (Burges et al., 2013), pages 3111–3119. Kristina Toutanova, Danqi Chen, Patrick Pantel, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. ACL Association for Computational Linguistics. Zhigang Wang and Juan-Zi Li. 2016. Text-enhanced representation learning for knowledge graph. In Subbarao Kambhampati, editor, Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New Y ork, NY , USA, 9-15 July 2016, pages 1293–1299. IJCAI/AAAI Press. 
5 440 441 442 443 444 445 446 447 448 449 49 49 49 49 49 49 49 49 49 49 embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) wi/ts l , , , w, , ej, e (7) on Software Engineering, 15(9):1066 1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 5 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 46 46 46 46 47 47 47 47 47 47 47 47 47 47 48 48 48 48 48 48 48 48 48 48 49 49 49 49 49 49 49 49 49 49 tion sense has an embedding (sense vector) tl and a context cluster with center µ(ts l ). The representation of the context is defined as the average of the word vectors in the context: C(wi) = 1 |C(wi)| P wj2C(wi) wj. We predict ts l , the sense of entity title tl in the mention < tl, C(tl) >, when observed with context C(tl) as the context cluster membership. Formally, we have: ts l = ⇢ ts+ 1 l tmax l < λ tmax l otherwise (5) where λ is a hyper-parameter and tmax l = argmaxts l sim(µ(ts l ), C(tl)). We adopt an online non-parametric clustering procedure to learn outof-KB mention senses, which means that if the nearest distance of the context vector to sense cluster center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) C(·) (7) . Qua tat ve a ys s before conducting the experiments on the tasks, we first give qualitative analysis of words, mentions and entities. firstly, we give the phrase embedding by its nearest words and entities. next, we give quantitative analysis on several tasks. 4.5 Entity Relatedness 4.6 Word Similarity 4.7 EL evaluation 4.7.1 gbdt 4.7.2 unsupervised 5 Related Work 6 Conclusion References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 
5 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 46 46 46 46 46 46 46 47 47 47 47 47 47 47 47 47 47 48 48 48 48 48 48 48 48 48 48 49 49 49 49 49 49 49 49 49 49 (WSD) task, we use the context information to distinguish existing mention senses, or create a new out-of-KB sense. To be concrete, each mention sense has an embedding (sense vector) ts l and a context cluster with center µ(ts l ). The representation of the context is defined as the average of the word vectors in the context: C(wi) = 1 |C(wi)| P wj2C(wi) wj. We predict ts l , the sense of entity title tl in the mention < tl, C(tl) >, when observed with context C(tl) as the context cluster membership. Formally, we have: ts l = ⇢ ts+ 1 l tmax l < λ tmax l otherwise (5) where λ is a hyper-parameter and tmax l = argmaxts l sim(µ(ts l ), C(tl)). We adopt an online non-parametric clustering procedure to learn outof-KB mention senses, which means that if the nearest distance of the context vector to sense cluster center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) C(·) (7) type model. 4.3 Parameter Setting 4.4 Qualitative Analysis before conducting the experiments on the tasks, we first give qualitative analysis of words, mentions and entities. firstly, we give the phrase embedding by its nearest words and entities. next, we give quantitative analysis on several tasks. 4.5 Entity Relatedness 4.6 Word Similarity 4.7 EL evaluation 4.7.1 gbdt 4.7.2 unsupervised 5 Related Work 6 Conclusion References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 5 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 46 46 46 46 46 46 46 46 46 47 47 47 47 47 47 47 47 47 47 48 48 48 48 48 48 48 48 48 48 49 49 49 49 49 49 49 49 49 49 y When encounter an mention of entity title tl, inspired by the idea of word sense disambiguation (WSD) task, we use the context information to distinguish existing mention senses, or create a new out-of-KB sense. To be concrete, each mention sense has an embedding (sense vector) ts l and a context cluster with center µ(ts l ). 
The representation of the context is defined as the average of the word vectors in the context: C(wi) = 1 |C(wi)| P wj2C(wi) wj. We predict ts l , the sense of entity title tl in the mention < tl, C(tl) >, when observed with context C(tl) as the context cluster membership. Formally, we have: ts l = ⇢ ts+ 1 l tmax l < λ tmax l otherwise (5) where λ is a hyper-parameter and tmax l = argmaxts l sim(µ(ts l ), C(tl)). We adopt an online non-parametric clustering procedure to learn outof-KB mention senses, which means that if the nearest distance of the context vector to sense cluster center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) C(·) (7) 1. directly align words with entity. 2. align mention with entity using single prototype model. 4.3 Parameter Setting 4.4 Qualitative Analysis before conducting the experiments on the tasks, we first give qualitative analysis of words, mentions and entities. firstly, we give the phrase embedding by its nearest words and entities. next, we give quantitative analysis on several tasks. 4.5 Entity Relatedness 4.6 Word Similarity 4.7 EL evaluation 4.7.1 gbdt 4.7.2 unsupervised 5 Related Work 6 Conclusion References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. Mention Representation Learning 5 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 nearest distance of the context vector to sense clus ter center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) wi/ts l , , , w, , ej, e (7) 6 Conclusion References Alfred V Aho and Margaret J Corasick. 19 cient string matching: an aid to bibliograph Communications of the ACM, 18(6):333–3 J-I Aoe. 1989. An efficient digital search alg using a double-array structure. IEEE Tra on Software Engineering, 15(9):1066–107 Christopher J. C. 
Burges, L´eon Bottou, Zoubi mani, and Kilian Q . Weinberger, editors. vances in Neural Information Processin 26: 27th Annual Conference on Neural In Processing Systems 2013. Proceedings o ing held December 5-8, 2013, Lake Tahoe United States. Xu Han, Zhiyuan Liu, and Maosong Sun Joint representation learning of text an edge for knowledge graph completion. abs/1611.04125. 5 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 text C(tl) as the context cluster membership. Formally, we have: ts l = ⇢ ts+ 1 l tmax l < λ tmax l otherwise (5) where λ is a hyper-parameter and tmax l = argmaxts l sim(µ(ts l ), C(tl)). We adopt an online non-parametric clustering procedure to learn outof-KB mention senses, which means that if the nearest distance of the context vector to sense cluster center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) C(·) (7) g q y tasks. 4.5 Entity Relatedness 4.6 Word Similarity 4.7 EL evaluation 4.7.1 gbdt 4.7.2 unsupervised 5 Related Work 6 Conclusion References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 5 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 where λ is a hyper-parameter and tmax l = argmaxts l sim(µ(ts l ), C(tl)). We adopt an online non-parametric clustering procedure to learn outof-KB mention senses, which means that if the nearest distance of the context vector to sense cluster center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. 
Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) C(·) (7) 4.7.1 gbdt 4.7.2 unsupervised 5 Related Work 6 Conclusion References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 5 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 ts l = ts+ 1 l tmax l < λ tmax l otherwise (5) where λ is a hyper-parameter and tmax l = argmaxts l sim(µ(ts l ), C(tl)). We adopt an online non-parametric clustering procedure to learn outof-KB mention senses, which means that if the nearest distance of the context vector to sense cluster center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) C(·) (7) 4.6 Word Similarity 4.7 EL evaluation 4.7.1 gbdt 4.7.2 unsupervised 5 Related Work 6 Conclusion References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 5 436 437 438 439 440 441 442 443 444 445 446 447 448 449 486 487 488 489 490 491 492 493 494 495 496 497 498 499 cluster center is the average of all the context vec tors belonging to that cluster. For the similarity metric, we use cosine in our experiments. 
Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) N(·) (7) g g g p Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 5 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 ter center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) N(·) (7) References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 5 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 non-parametric clustering procedure to learn outof-KB mention senses, which means that if the nearest distance of the context vector to sense cluster center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) N(·) (7) 5 Related Work 6 Conclusion References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. 
J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. played it during public events, such as
 [[ ]] celebrations Mention Sense Mapping 5 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 tl otherwise where λ is a hyper-parameter and tmax l = argmaxts l sim(µ(ts l ), C(tl)). We adopt an online non-parametric clustering procedure to learn outof-KB mention senses, which means that if the nearest distance of the context vector to sense cluster center is larger than a threshold, we create a new context cluster and a new sense vector that doesn’t belong to any entity-centric senses. The cluster center is the average of all the context vectors belonging to that cluster. For the similarity metric, we use cosine in our experiments. Here, we extend Skip-gram model to learn word embeddings as well as mention sense embeddings by the following objective to maximize the probability of observing the context words given either a word wi or a mention sense of entity title ts l : Lw = X wi,tl2D P(C(wi)|wi) + P(C(tl)|tl, ts l ) (6) g(J uly 4th, e1) (7) 4.7 EL evaluation 4.7.1 gbdt 4.7.2 unsupervised 5 Related Work 6 Conclusion References Alfred V Aho and Margaret J Corasick. 1975. Efficient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333–340. J-I Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9):1066–1077. Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q . Weinberger, editors. 2013. Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States. Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR, abs/1611.04125. 3 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 learn two mention senses s1 1, s1 2 for m1, and one mention sense s2 2 for m2. Clearly, these two mentions share a common sense in the last two documents: the United States holiday e2, so we have s⇤ 2 = s1 2 = s2 2. Note that w, m , s are naturally embedded into the same semantic space since they are basic units in texts, and e modeling the graph structure in KB is actually in another semantic space. 3 Method In this section, we firstly describe the framework of MPME, followed by the detailed information of each key component. Then, we introduce a well designed mention sense disambiguation method, which can also be used for entity linking in a unsupervised way. eN ational Day s⇤ Independence Day (film), s⇤ Independence Day (US) 3.1 Framework Given KB, D and A, we are to jointly learn word, entity and mention representations: w, e, m . Serving as basic units in texts, Word {wi} and entity title {tl} are naturally embedded into a unified semantic space, meanwhile entities {ej} are mapped to one of mention senses of its title: ts l . Thus, text and knowledge are combined via the bridge of mentions. We can easily obtain the similarity between word and entity S imilarity(wi, ej) by computing the similarity between word and its corresponding mention sense: S imilarity(wi, f(ej)). 
As shown in Figure 2, our proposed MPME contains four key components: (1) Mention Sense Mapping: we map the anchor < mh, ej >2 A to the corresponding mention sense ts l to reduce the vocabulary to learn. (2) Entity Representation Learning given a knowledge base KB, we construct a knowledge network among entities, and given annotated we learn entity both contextual beddings in orde senses that has s sponding entity Representatio an iterative upd optimization ob beddings wi an own semantic sp the new learned inspires us to gl choosing mentio tion names in t mention sense c tion sense disam garded as linkin unsupervised w tion ? ? . 3.2 Mention S There are two k mention senses, tion senses. The beginning. Give tract entity title mention senses, on how many e latter is to find given mention n mention generat Conventional generally mainta and entity that knowledge base nizes the mentio matching. Or it fi names in texts u nition) tool, and didate entities vi Since this co paper, we adop 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 s⇤ 2 = s1 2 = s2 2. Note that w, m , s are naturally e bedded into the same semantic space since t are basic units in texts, and e modeling the gr structure in KB is actually in another seman space. 3 Method In this section, we firstly describe the framew of MPME, followed by the detailed information each key component. Then, we introduce a w designed mention sense disambiguation meth which can also be used for entity linking in a supervised way. eN ational Day s⇤ Independence Day (film), s⇤ Independence Day (U 3.1 Framework Given KB, D and A, we are to jointly le word, entity and mention representations: w m . Serving as basic units in texts, Word {w and entity title {tl} are naturally embedded i a unified semantic space, meanwhile entities { are mapped to one of mention senses of its tle: ts l . Thus, text and knowledge are co bined via the bridge of mentions. We can e ily obtain the similarity between word and tity S imilarity(wi, ej) by computing the simi ity between word and its corresponding ment sense: S imilarity(wi, f(ej)). As shown in Figure 2, our proposed MPM contains four key components: (1) Mention Se Mapping: we map the anchor < mh, ej >2 A the corresponding mention sense ts l to reduce vocabulary to learn. (2) Entity Representat Learning given a knowledge base KB, we c struct a knowledge network among entities, outlink Observed by category 3 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 ACL 2016 Submission ***. Confidential review copy. DO NOT DISTRIBUTE. KB, a text corpus D and a set of anchors A, multiprototype mention embedding is to learn multiple sense embeddings sjl 2 Rk for each mention ml as well as word embeddings w and entity embeddings e. Note that sl j 2 m⇤ l denotes that mention sense of ml refers to entity ej, where m⇤ l represents the sense set of ml. Different mentions may share the same mention sense, denoted as s⇤ j. Example As shown in Figure 1, there are two different mentions “Independence Day” m1 and “July 4th” m2 in the documents. MPME is to learn two mention senses s1 1, s1 2 for m1, and one mention sense s2 2 for m2. Clearly, these two mentions share a common sense in the last two documents: the United States holiday e2, so we have s⇤ 2 = s1 2 = s2 2. 
Note that w, m , s are naturally embedded into the same semantic space since they are basic units in texts, and e modeling the graph structure in KB is actually in another semantic space. 3 Method In this section, we firstly describe the framework of MPME, followed by the detailed information of each key component. Then, we introduce a well designed mention sense disambiguation method, which can also be used for entity linking in a unsupervised way. 3.1 Framework Given knowledge base KB, text corpus D and a set of anchors A, we are to jointly learn word, entity and mention representations: w, e, m . As shown in Figure 2, our proposed MPME contains four key components: (1) Mention Sense Mapping: given an anchor < ml, ej >, we map it to the corresponding mention sense to reduce the mention vocabulary to learn embeddings. If only a mention is given, we map it to several mention senses that requires disambiguation (Section 3.4). (2) Entity Representation Learning based on outlinks in Wikipedia pages, we construct a knowledge network to represent the semantic relatedness among entities. And then learn entity embeddings so that similar entities on the graph have similar representations. (3) Mention Representation Learning given mapped anchors in contexts, we learn mention sense embeddings by incorporating both textual context embeddings and entity embeddings. (4) Text Representation Learning we extend skip-gram model to simultaneously learn word and mention sense embeddings on annotated text corpus D0. Following (Yamada et al., 2016), we use wikipedia articles as text corpus, and the anchors provide annotated mentions1. We jointly train (2), (3) and (4) by using a unified optimization objective. The outputs embeddings of word and mention are naturally in the same semantic space since they are different units in annotated text corpus D0 for text representation learning. Entity embeddings keep their own semantics in another vector space, because we only use them as answers to predict in mention representation learning by extending Continuous BOW model, which will be further discussed in Section ? ? . s⇤ M emorial Day word embeddings wi and entity embeddings ej keep their own semantic space and are naturally bridged via the new learned entity title embeddings tl, which inspires us to globally optimize the probability of choosing mention senses of all the phrases of mention names in the given document. Since each mention sense corresponds to an entity, the mention sense disambiguation process can also be regarded as linking entities to knowledge base in a unsupervised way, which will be detailed in Section ? ? . 3.2 Mention Sense Mapping There are two kinds of mappings: from entities to mention senses, and from mention names to mention senses. The former is pre-defined at the very beginning. Given the knowledge Base KB, we extract entity titles {tl} and initialize with multiple mention senses, where the sense number depends on how many entities share a common title. The latter is to find possible mention senses for the given mention name, which is similar to candidate mention generation in entity linking task. 
Conventional candidate mention generation generally maintains a list of pairs of mention name and entity that denotes a candidate reference in knowledge base for the mention name, and recognizes the mention name in text by accurate string 1We can also annotate text corpus by using NER tool like python nltk to recognize mentions, and disambiguating its mapped mention senses as described in Section 3.4. This is an ongoing work with the goal of learning additional out-ofKB senses by self-training. In this paper, we will focus on the effectiveness of our model and the quality of three kinds of learned embeddings. 3 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 dings e. Note that sj 2 ml denotes that mention sense of ml refers to entity ej, where m⇤ l represents the sense set of ml. Different mentions may share the same mention sense, denoted as s⇤ j. Example As shown in Figure 1, there are two different mentions “Independence Day” m1 and “July 4th” m2 in the documents. MPME is to learn two mention senses s1 1, s1 2 for m1, and one mention sense s2 2 for m2. Clearly, these two mentions share a common sense in the last two documents: the United States holiday e2, so we have s⇤ 2 = s1 2 = s2 2. Note that w, m , s are naturally embedded into the same semantic space since they are basic units in texts, and e modeling the graph structure in KB is actually in another semantic space. 3 Method In this section, we firstly describe the framework of MPME, followed by the detailed information of each key component. Then, we introduce a well designed mention sense disambiguation method, which can also be used for entity linking in a unsupervised way. 3.1 Framework Given knowledge base KB, text corpus D and a set of anchors A, we are to jointly learn word, entity and mention representations: w, e, m . As shown in Figure 2, our proposed MPME contains four key components: (1) Mention Sense Mapping: given an anchor < ml, ej >, we map it to the corresponding mention sense to reduce the mention vocabulary to learn embeddings. If only a mention is given, we map it to several mention senses that requires disambiguation (Section 3.4). (2) Entity Representation Learning based on outlinks in Wikipedia pages, we construct a knowledge network to represent the semantic relatedness among entities. And then learn entity embeddings so that similar entities on the graph have similar representations. (3) Mention Representation Learning given mapped anchors in contexts, we learn mention sense embeddings by incorporating both textual context embeddings and entity embeddings. (4) Text Representation Learning we extend skip-gram model to simultaneously learn We jointly train (2), (3) and (4) by using a un fied optimization objective. The outputs embed dings of word and mention are naturally in th same semantic space since they are different uni in annotated text corpus D0 for text representatio learning. Entity embeddings keep their own se mantics in another vector space, because we onl use them as answers to predict in mention repre sentation learning by extending Continuous BOW model, which will be further discussed in Sectio 3.3.4. 
Figure 2: Framework of the Multi-Prototype Mention Embedding model (panels: Knowledge Space and Text Space). Mention Sense Mapping To reduce the size of the mention vocabulary, each mention is mapped to a set of shared mention senses according to a predefined dictionary. We build the dictionary by collecting entity-mention pairs <m_l, e_j> from Wikipedia anchors and page titles, and create a separate mention sense for each distinct entity. The number of senses of a mention therefore depends on how many different entity-mention pairs it is involved in. Formally, we have $M^*_l = g(m_l) = \bigcup_j g(\langle m_l, e_j \rangle) = \{s^*_j\}$, where g(·) denotes the mapping function from an entity mention to its mention sense given an anchor. We directly use the anchors contained in the annotated text corpus D′ for training. As Figure 2 shows, we replace the anchor <July 4th, Independence Day (US)> with the corresponding mention sense s*_{Independence Day (US)}.
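As a concrete illustration of the mapping g(·), the following sketch builds the mention-to-senses dictionary from a list of (mention, entity) anchor pairs. The input format and the choice to identify a sense directly with its reference entity are assumptions made for illustration; they are not taken from the authors' released code.

```python
from collections import defaultdict

def build_sense_dictionary(anchors):
    """Build the mention -> mention-sense mapping g(.) from Wikipedia anchors.

    `anchors` is assumed to be an iterable of (mention_name, entity_title) pairs
    harvested from anchor links and page titles (an assumed input format).
    Since each mention sense corresponds to one entity, a sense is identified
    here by its reference entity, so mentions of the same entity share the sense.
    """
    sense_dict = defaultdict(set)
    for mention, entity in anchors:
        sense_dict[mention].add(entity)
    return dict(sense_dict)

senses = build_sense_dictionary([
    ("July 4th", "Independence Day (US)"),
    ("Independence Day", "Independence Day (US)"),
    ("Independence Day", "Independence Day (film)"),
])
print(senses["Independence Day"])  # {'Independence Day (US)', 'Independence Day (film)'}
print(senses["July 4th"])          # {'Independence Day (US)'} -- shared with the mention above
```

Because a sense is keyed by its entity, different surface forms of the same entity (July 4th and Independence Day above) automatically share that sense, which matches the notion of shared mention senses described in the text.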
Representation Learning Using KB, A and D′ as input, we design three separate models and a unified optimization objective to jointly learn entity, word and mention sense representations in two semantic spaces. As shown in the knowledge space in Figure 2, entity embeddings can reflect their relatedness in the network. For example, Independence Day (US) (e1) and Memorial Day (e3) are close to each other because they share some common neighbors, such as United States and Public holidays in the United States. Word and mention embeddings are learned in the same semantic space. As two basic units in D′, their embeddings represent their distributed semantics in texts. For example, the mention Independence Day and the word celebrations co-occur frequently when the mention refers to the holiday Independence Day (US), so they have similar representations. Without disambiguating the mention senses, some words, such as film, will also share representations similar to Independence Day. Moreover, by introducing entity embeddings into our MPME framework, knowledge information is also distilled into the mention sense embeddings, so that the mention sense Memorial Day will be similar to Independence Day (US). Mention Sense Disambiguation According to our predefined dictionary, each mention has been mapped to more than one sense and learned with multiple embedding vectors. Consequently, inducing the correct sense for a mention within its context is critical when using the multi-prototype embeddings, especially in an unsupervised way. Formally, given an annotated document D′, we determine one sense ŝ*_j ∈ M*_l for each mention m_l ∈ D′, where ŝ*_j is the correct sense. Based on a language model, we design a mention sense disambiguation method without using any supervision that takes into account three aspects: 1) the sense prior denotes how dominant the sense is, 2) local context information reflects how semantically appropriate the sense is in the context, and 3) global mention information denotes how semantically consistent the sense is with the neighbor mentions. To better utilize the context information, we maintain a context cluster for each mention sense during training, which will be detailed in Section 4.4. Since each mention sense corresponds to an entity in the given KB, the disambiguation method is equivalent to entity linking. Thus, text and knowledge base are bridged via the multi-prototype mention embeddings. We will give more analysis in Section 6.4.
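Before the individual objectives are spelled out in Section 4, it may help to see the shape of the training signal. The sketch below shows a single word2vec-style negative-sampling update; under the framework described above, the same kind of update would drive all three objectives, with only the choice of input and output vectors changing. Constants, initialization, and the toy demo at the end are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, NEG, LR = 200, 5, 0.025

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_update(in_vec, out_vecs, pos_idx, lr=LR):
    """One negative-sampling step of a word2vec-style objective:
    raise the score of the positive output vector (row pos_idx) and
    lower the scores of the sampled negative rows, updating both sides in place."""
    grad_in = np.zeros_like(in_vec)
    for i in range(len(out_vecs)):
        label = 1.0 if i == pos_idx else 0.0
        g = lr * (sigmoid(in_vec @ out_vecs[i]) - label)
        grad_in += g * out_vecs[i]
        out_vecs[i] -= g * in_vec   # update the output-side vector
    in_vec -= grad_in               # update the input-side vector

# The same update schematically drives all three objectives:
#   L_w: in_vec = a word (or mention sense) vector, positive = a context word;
#   L_e: in_vec = an entity vector,                 positive = a KB neighbor entity;
#   L_m: in_vec = averaged context + sense vector,  positive = the anchored entity.
center = rng.standard_normal(DIM) * 0.01
outputs = rng.standard_normal((1 + NEG, DIM)) * 0.01   # row 0 positive, remaining rows negatives
neg_sampling_update(center, outputs, pos_idx=0)
```

In practice the embeddings would live in pre-allocated matrices and the negatives would be drawn from a unigram-based noise distribution, as in word2vec.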
4 Representation Learning Distributional representation learning plays an increasingly important role in many fields (Bengio et al., 2013; Zhang et al., 2017, 2016) due to its effectiveness for dimensionality reduction and for addressing sparseness issues. For NLP tasks, this trend has been accelerated by the Skip-gram and CBOW models (Mikolov et al., 2013a,b) due to their efficiency and the remarkable semantic compositionality of the embedding vectors. In this section, we first briefly introduce the Skip-gram and CBOW models, and then extend them to three variants for word, mention and entity representation learning.

4.1 Skip-Gram and CBOW Model The basic idea of the Skip-gram and CBOW models is to model the predictive relations among sequential words. Given a sequence of words D, the optimization objective of the Skip-gram model is to use the current word to predict its context words by maximizing the average log probability: $L = \sum_{w_i \in D} \sum_{w_o \in C(w_i)} \log P(w_o \mid w_i)$ (1). In contrast, the CBOW model aims to predict the current word given its context words: $L = \sum_{w_i \in D} \log P(w_i \mid C(w_i))$ (2). Formally, the conditional probability P(w_o|w_i) is defined using a softmax function: $P(w_o \mid w_i) = \frac{\exp(\mathbf{w}_i \cdot \mathbf{w}_o)}{\sum_{w \in D} \exp(\mathbf{w}_i \cdot \mathbf{w})}$ (3), where $\mathbf{w}_i$ and $\mathbf{w}_o$ denote the input and output word vectors during training. Furthermore, these two models can be accelerated by using hierarchical softmax or negative sampling (Mikolov et al., 2013a,b).

4.2 Entity Representation Learning Given a knowledge base KB, we aim to learn entity embeddings by modeling "contextual" entities, so that entities sharing more common neighbors tend to have similar representations. Therefore, we extend the Skip-gram model to a network by maximizing the log probability of being a neighbor entity: $L_e = \sum_{e_j \in E} \log P(N(e_j) \mid e_j)$ (4). Clearly, the neighbor entities serve a role similar to the context words in the Skip-gram model. As shown in Figure 2, the entity Memorial Day (e3) also shares two common neighbors, United States and Public holidays in the United States, with the entity Independence Day (US), so their embeddings are close in the knowledge space. These entity embeddings will later be used to learn mention representations.

4.3 Mention Representation Learning As mentioned above, the textual context information and the reference entities are helpful for distinguishing different senses of a mention. Thus, given an anchor <m_l, e_j> and its context words C(m_l), we combine the mention sense embedding with the context word embeddings to predict the reference entity by extending the CBOW model. The objective function is as follows: $L_m = \sum_{\langle m_l, e_j \rangle \in A} \log P(e_j \mid C(m_l), s^*_j)$ (5), where s*_j = g(<m_l, e_j>). Thus, if two mentions refer to similar entities and share similar contexts, they tend to be close in the semantic vector space. Taking Figure 1 as an example again, the mentions Independence Day and Memorial Day refer to the similar entities Independence Day (US) (e1) and Memorial Day (e2), and they also share some similar context words, such as celebrations in documents d2 and d3, so their sense embeddings are close to each other in the text space.
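To make the extended-CBOW objective of Eq. (5) concrete, the sketch below forms a prediction vector from the context word embeddings and the sense embedding and turns dot-product scores against candidate entities into probabilities. The plain averaging used to combine the vectors, the softmax over a small candidate set (rather than negative sampling over all entities), and the toy dimensions are assumptions for illustration only.

```python
import numpy as np

def mention_prediction_vector(context_word_vecs, sense_vec):
    """Prediction vector for the extended-CBOW objective L_m in Eq. (5):
    the context word embeddings of the anchor are combined with the mention
    sense embedding. The paper does not pin down the combination; a plain
    average over all vectors is assumed here."""
    return np.mean(np.vstack(list(context_word_vecs) + [sense_vec]), axis=0)

def entity_softmax(pred_vec, entity_matrix):
    """P(e_j | C(m_l), s*_j) over a small candidate set via a softmax of
    dot-product scores; full training would use negative sampling instead."""
    scores = entity_matrix @ pred_vec
    scores -= scores.max()             # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

rng = np.random.default_rng(1)
ctx = [rng.standard_normal(200) for _ in range(4)]   # C(m_l), toy vectors
sense = rng.standard_normal(200)                     # s*_j
entities = rng.standard_normal((10, 200))            # candidate entity embeddings
print(entity_softmax(mention_prediction_vector(ctx, sense), entities).round(3))
```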
The context cluster of a mention sense s∗ j contains all the context vectors of its mention ml. We compute context vector of ml by averaging the sum of its context word embeddings: 1 |C(ml)| P wj∈C(ml) wj. Further, the center of a context cluster µ∗ j is defined as the average of context vectors of all mentions which refer to the sense. These context clusters will be later used to disambiguate the sense of a given mention with its contexts. 4.5 Joint Training Considering all of the above representation learning components, we define the overall objective function as linear combinations: L = Lw + Le + Lm (7) The goal of training MPME is to maximize the above function, and iteratively update three types of embeddings. Also, we use negative sampling technique for efficiency (Mikolov et al., 2013a). MPME shares the same entity representation learning method with (Yamada et al., 2016), but the role of entities in the entire framework as well as mention representation learning is different in three aspects. First, we focus on learning embeddings for mentions, not merely words as in (Yamada et al., 2016). Clearly, MPME is more natural to integrate text and knowledge base. Second, we propose to learn multiple embeddings for each mention denoting its different meanings. Third, we prefer to use both mentions and context words to predict entities, so that the distribution of entities will help improve word embeddings, meanwhile, avoid being hurt if we force entity embeddings to satisfy word embeddings during training (Wang et al., 2014). We will give more analysis in experiments. 5 Mention Sense Disambiguation As mentioned in Section 3, we induce a correct sense ˆs∗ j ∈M∗ l for each mention ml in an annotated document D′. We regard this problem from the perspective of language model that maximizes a joint probability of all mention senses contained in the document. However, the global optimum is expensive with a time complexity of O(|M||M∗ l |). Thus, we approximately identify each mention sense independently: P(D′, . . . , s∗ j, . . . , ) ≈ Y P(D′|s∗ j) · P(s∗ j) ≈ Y P(C(ml)|s∗ j) · P( ˆ N(ml)|s∗ j) · P(s∗ j) (8) where P(C(ml)|s∗ j), local context information (Section 3), denotes the probability of the local contexts of ml given its mention sense s∗ j. we define it proportional to the cosine similarity between the current context vector and the sense context cluster center µ∗ j as described in Section 4.4. It measures how likely a mention sense occurring together with current context words. For example, given the mention sense Independence Day (film), word film is more likely to appear within the context than the word celebrations. P( ˆ N(ml)|sl j), global mention information, denotes the probability of the contextual mentions of ml given its sense sl j, where ˆ N(ml) is the collection of the neighbor mentions occurring together with ml in a predefined context window. We define it proportional to the cosine similarity between mention sense embeddings and the neighbor mention vector, which is computed similar to 1627 context vector: P 1 | ˆ N(ml)|ˆsl j, where ˆsl j is the correct sense for ml. Considering there are usually multiple mentions in a document to be disambiguated. The mentions disambiguated first will be helpful for inducing the senses of the rest mentions. That is, how to choose the mentions disambiguated first will influence the performance. 
Intuitively, we adopt two orders similar to (Chen et al., 2014): 1) L2R (left to right) induces senses for all the mentions in the document following the natural reading order, normally from left to right in the sequence; 2) S2C (simple to complex) first determines the correct sense for those mentions with fewer senses, which makes the problem easier. Global mention information assumes that there should be consistent semantics in a context window, and measures whether all neighbor mentions are related. For instance, suppose the two mentions Memorial Day and Independence Day occur in the same document. If we already know that Memorial Day denotes a holiday, then obviously Independence Day has a higher probability of being a holiday than a film. P(s*_j), the sense prior, is a prior probability of sense s*_j indicating how likely it is to occur without considering any additional information. We define it as proportional to the frequency of sense s*_j in Wikipedia anchors: $P(s^*_j) = \left( \frac{|A_{s^*_j}|}{|A|} \right)^{\gamma}, \; \gamma \in [0, 1]$, where $A_{s^*_j}$ is the set of anchors annotated with s*_j, and γ is a smoothing hyper-parameter that controls the impact of the prior on the overall probability, which is set by experiments (Section 6.4).

6 Experiment Setup We choose Wikipedia, the March 2016 dump, as training corpus, which contains nearly 75 million anchors, 180 million edges among entities and 1.8 billion tokens after preprocessing. We then train MPME (our main code for MPME can be found at https://github.com/TaoMiner/bridgeGap) for 1.5 million words, 5 million entities and 1.7 million mentions. The entire training process of 10 iterations takes nearly 8 hours on a server with a 64-core CPU and 188GB of memory. We use the default settings of word2vec (https://code.google.com/archive/p/word2vec/), and set the embedding dimension to 200 and the context window size to 5. For each positive example, we sample 5 negative examples (we tested different parameters, e.g., a window size of 10 and a dimension of 500, which achieve similar results, and report the current settings considering program runtime efficiency). Baseline Methods As far as we know, this is the first work to deal with mention ambiguity in the integration of text and knowledge representations, so there are no exact baselines for comparison. We use the method in (Yamada et al., 2016) as a baseline, marked as ALIGN, because (1) it is the most similar work, directly aligning word and entity embeddings, and (2) it achieves state-of-the-art performance on the entity linking task. (We carefully re-implemented ALIGN and used the same shared parameters as in our model for fair comparison; however, we failed to fully reproduce the positive result in the original paper, and the authors were unable to release their code.) To investigate the effect of the multi-prototype embeddings, we degrade our method to single-prototype as another baseline, which uses one sense to represent all mentions with the same phrase, namely Single-Prototype Mention Embedding (SPME). For example, SPME learns only one sense vector for Independence Day, whether it denotes the holiday or the film.

6.1 Qualitative Analysis We use cosine similarity to measure the similarity of two vectors, and present the top 5 nearest words and entities for the two most popular senses of the mention Independence Day. Because ALIGN cannot handle multi-word mentions, we only present the results of SPME and MPME. As shown in Table 1, without considering mention senses, SPME can only capture the dominant holiday sense of the mention Independence Day and ignores all other senses. Instead, MPME successfully learns two clear and distinct senses. For the sense Independence Day (US), all of its nearest words and entities, such as parades, celebrations, and Memorial Day, are holiday related, while for the other sense Independence Day (film), its nearest words and entities, like robocop and The Terminator, are all science fiction films. The results verify the effectiveness of our framework in learning mention embeddings at the sense level.
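The nearest-neighbor inspection used in this qualitative analysis amounts to ranking vocabulary items by cosine similarity against a sense vector; a minimal sketch is given below (the vector tables and keys are assumed inputs, not the authors' data structures).

```python
import numpy as np

def top_k_neighbors(query_vec, vocab_vecs, vocab_keys, k=5):
    """Return the k items whose embeddings have the highest cosine similarity
    to `query_vec`; used to list the nearest words/entities of a mention sense."""
    mat = np.asarray(vocab_vecs, dtype=float)
    mat = mat / (np.linalg.norm(mat, axis=1, keepdims=True) + 1e-12)
    q = np.asarray(query_vec, dtype=float)
    q = q / (np.linalg.norm(q) + 1e-12)
    sims = mat @ q
    order = np.argsort(-sims)[:k]
    return [(vocab_keys[i], float(sims[i])) for i in order]

# e.g. top_k_neighbors(sense_vecs["Independence Day (film)"], word_matrix, word_list)
```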
Table 1: The nearest neighbors of the mention Independence Day.
Model | Mention sense | Nearest words | Nearest entities
SPME | Independence Day | lee-jackson, thanksgiving, diwali, strassenfest, chiraghan | National Aboriginal and Torres Strait Islander Education Policy, E. Chandrasekharan Nair, Jean Aileen Little, Thessalian barbel, 1825 in birding and ornithology
MPME | Independence Day (US) | thanksgiving, parades, lee-jackson, festivities, celebrations | Memorial Day, Labor Day, Thanksgiving, Thanksgiving (United States), Saint Patrick's Day
MPME | Independence Day (film) | robocop, clockstoppers, mindhunters, tarantino, terminator | The Terminator, True Lies, Total Recall (1990 film), RoboCop 2, Die Hard

6.2 Entity Relatedness To evaluate the quality of the entity embeddings, we conduct experiments on the dataset designed for measuring entity relatedness (Ceccarelli et al., 2013; Huang et al., 2015; Yamada et al., 2016). The dataset contains 3,314 entities, and each mention has 91 candidate entities on average, with gold-standard labels indicating whether they are semantically related. We compute the cosine similarity between entity embeddings to measure their relatedness, and rank them in descending order. To evaluate the ranking quality, we use two standard metrics: normalized discounted cumulative gain (NDCG) (Järvelin and Kekäläinen, 2002) and mean average precision (MAP) (Schütze, 2008). We design another baseline method, Entity2vec, which learns entity embeddings using the method described in Section 4.2 without joint training with word and mention sense embeddings.

Table 2: Entity relatedness.
Method | NDCG@1 | NDCG@5 | NDCG@10 | MAP
ALIGN | 0.416 | 0.432 | 0.472 | 0.410
Entity2vec | 0.593 | 0.595 | 0.636 | 0.566
SPME | 0.593 | 0.594 | 0.636 | 0.566
MPME | 0.613 | 0.613 | 0.654 | 0.582

As shown in Table 2, ALIGN achieves lower performance than Entity2vec, because it does not consider mention phrase ambiguity and introduces a lot of noise when forcing entity embeddings to satisfy word embeddings and aligning them into a unified space. For example, the entity Gente (magazine) should be more relevant to the entity France, the place where its company is located. However, ALIGN mixes the various meanings of the mention Gente (e.g., the song) and ranks some bands higher (e.g., the entity Poolside (band)). SPME also does not consider the ambiguity of mentions but achieves results comparable to Entity2vec. We analyze the reasons and find that it can avoid some noise by using word embeddings to predict entities. MPME outperforms all the other methods, which demonstrates that the unambiguous textual information is helpful to refine the entity embeddings.
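For reference, the ranking-based evaluation just described can be sketched as follows: the candidates of a query entity are ordered by cosine similarity and scored with NDCG@k and average precision (MAP being the mean of the latter over all queries). This is a generic implementation of the metrics for binary relevance labels, not the exact evaluation script behind Table 2.

```python
import numpy as np

def rank_candidates(query_vec, candidate_vecs, labels):
    """Order one query's candidates by cosine similarity and return their labels in that order."""
    sims = [float(query_vec @ c / (np.linalg.norm(query_vec) * np.linalg.norm(c) + 1e-12))
            for c in candidate_vecs]
    order = np.argsort(-np.asarray(sims))
    return [labels[i] for i in order]

def ndcg_at_k(ranked_labels, k):
    """NDCG@k for binary relevance labels already ordered by predicted similarity."""
    gains = np.asarray(ranked_labels[:k], dtype=float)
    dcg = float((gains / np.log2(np.arange(2, gains.size + 2))).sum())
    ideal = np.sort(np.asarray(ranked_labels, dtype=float))[::-1][:k]
    idcg = float((ideal / np.log2(np.arange(2, ideal.size + 2))).sum())
    return dcg / idcg if idcg > 0 else 0.0

def average_precision(ranked_labels):
    hits, score = 0, 0.0
    for i, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            score += hits / i
    return score / max(hits, 1)

# MAP is the mean of average_precision over all queries.
```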
6.3 Word Analogical Reasoning Following (Mikolov et al., 2013a; Wang et al., 2014), we use the word analogical reasoning task to evaluate the quality of the word embeddings. The dataset consists of 8,869 semantic questions (e.g., "Paris":"France"::"Rome":?) and 10,675 syntactic questions (e.g., "sit":"sitting"::"walk":?). We solve a question by finding the word vector $\mathbf{w}_?$ closest to $\mathbf{w}_{\text{France}} - \mathbf{w}_{\text{Paris}} + \mathbf{w}_{\text{Rome}}$ according to cosine similarity, and we measure performance as the accuracy of the top-1 nearest word.

Table 3: Word analogical reasoning (accuracy).
Questions | Word2vec | ALIGN | SPME | MPME
Semantic | 66.78 | 68.34 | 71.65 | 71.65
Syntactic | 61.58 | 59.73 | 55.28 | 54.75

We also adopt Word2vec (https://code.google.com/archive/p/word2vec/) as an additional baseline method, which provides a standard against which to measure the impact of the other components on the word embeddings. Table 3 shows the results. We can see that ALIGN, SPME and MPME achieve higher performance on semantic questions, because relations among entities (e.g., the country-capital relation between the entities France and Paris) enhance the semantics of the word embeddings through joint training. On the other hand, their performance on syntactic questions is weakened, because the more accurate semantics introduces a bias towards predicting semantic relations even when given a syntactic query. For example, given the query "pleasant":"unpleasant"::"possibly":?, our model tends to return a word (e.g., probably) highly semantically related to query words such as possibly, instead of the syntactically similar word impossibly. In this scenario, we are more concerned with the semantic task of incorporating knowledge of reference entities into word embeddings, and this issue could be tackled, to some extent, by using syntactic tools such as stemming. The word embeddings of MPME achieve the best performance on semantic questions mainly because (1) text representation learning has better generalization ability due to the larger number of training examples compared with entities (1.8b vs. 0.18b) as well as the relatively smaller vocabulary (1.5m vs. 5m), and (2) unambiguous mention embeddings capture both textual context information and knowledge, and thus enhance word and entity embeddings.

6.4 A Case Study: Entity Linking Entity linking is a core NLP task of identifying the reference entity for mentions in texts. The main difficulty lies in the ambiguity of various entities sharing the same mention phrase. Previous work addressed this issue by taking advantage of the similarity between words and entities (Francis-Landau et al., 2016; Sun et al., 2015) and/or the relations among entities (Nguyen et al., 2016; Cao et al., 2015). Therefore, we use entity linking as a case study for a comprehensive measurement of the multi-prototype mention embeddings. Given mentions in a text, entity linking aims to link them to a predefined knowledge base. One of the main challenges in this task is the ambiguity of entity mentions. We use the public AIDA dataset created by (Hoffart et al., 2011), which includes 1,393 documents and 27,816 mentions referring to Wikipedia entries. The dataset has been divided into 946, 216 and 231 documents for training, development and testing. Following (Pershina et al., 2015; Yamada et al., 2016), we use a publicly available dictionary to generate candidate entities and mention senses. For evaluation, we rank the candidate entities for each mention and report both standard micro (aggregates over all mentions) and macro (aggregates over all documents) precision over top-ranked entities.
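The micro and macro precision@1 just defined differ only in how correctness flags are aggregated; a small sketch makes the distinction explicit (the per-document result format is an assumption for illustration).

```python
def micro_macro_p_at_1(doc_results):
    """doc_results: {doc_id: list of booleans, one per mention, True if the
    top-ranked entity for that mention is correct}. Micro P@1 aggregates over
    all mentions; macro P@1 averages the per-document accuracies."""
    all_flags = [f for flags in doc_results.values() for f in flags]
    micro = sum(all_flags) / len(all_flags)
    per_doc = [sum(flags) / len(flags) for flags in doc_results.values() if flags]
    macro = sum(per_doc) / len(per_doc)
    return micro, macro

results = {"doc1": [True, False, True], "doc2": [True]}
print(micro_macro_p_at_1(results))   # (0.75, 0.8333...)
```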
Supervised Entity Linking Yamada et al. (2016) designed a list of features for each mention and candidate entity pair. By incorporating these features into a supervised learning-to-rank algorithm, Gradient Boosting Regression Tree (GBRT), each pair is assigned a relevance score indicating whether they should be linked to each other. Following their recommended parameters, we set the number of trees to 10,000, the learning rate to 0.02 and the maximum depth of the decision trees to 4. Based on the word and entity embeddings learned by ALIGN, the key features in (Yamada et al., 2016) come from two aspects: (1) the cosine similarity between context words and the candidate entity, and (2) the coherence among "contextual" entities in the same document. To evaluate the performance of the multi-prototype mention embeddings, we incorporate the following features into GBRT for comparison: (1) the cosine similarity between the current context vector and the sense context cluster center µ*_j, which denotes how likely the mention sense is to refer to the candidate entity, and (2) the cosine similarity between the current context vector and the mention sense embedding.

Table 4: Performance of the supervised method.
Metric | ALIGN | SPME | MPME
Micro P@1 | 0.828 | 0.820 | 0.851
Macro P@1 | 0.862 | 0.844 | 0.881

As shown in Table 4, ALIGN performs better than SPME. This is because SPME learns word embeddings and entity embeddings in separate semantic spaces, and fails to measure the similarity between context words and candidate entities. MPME, however, computes the similarity between context words and mention senses instead of entities, and thus achieves the best performance, which also demonstrates the high quality of the mention sense embeddings. Unsupervised Entity Linking Linking a mention to a specific entity is equivalent to disambiguating mention senses, since each candidate entity corresponds to a mention sense. As described in Section 5, we disambiguate senses in two orders: (1) L2R (from left to right) and (2) S2C (from simple to complex). We evaluate our unsupervised disambiguation methods on the entire AIDA dataset. To be fair, we compare against the state-of-the-art unsupervised methods proposed in (Hoffart et al., 2011; Alhelbawy and Gaizauskas, 2014; Cucerzan, 2007; Kulkarni et al., 2009; Shirakawa et al., 2011) on the same dataset.

Table 5: Performance of unsupervised methods.
Metric | Cucerzan | Kulkarni | Hoffart | Shirakawa | Alhelbawy | MPME (L2R) | MPME (S2C)
Micro P@1 | 0.510 | 0.729 | 0.818 | 0.823 | 0.842 | 0.882 | 0.885
Macro P@1 | 0.437 | 0.767 | 0.819 | 0.830 | 0.875 | 0.875 | 0.890

Table 5 shows the results. We can see that our two methods outperform all the other methods. MPME (L2R) is more efficient and easier to apply, while MPME (S2C) slightly outperforms it, because the additional step of ranking mentions according to their number of candidates guarantees a higher disambiguation performance for the simple mentions, which consequently helps disambiguate the complex mentions through the global mention information in Equation 8. We analyze the results and observe a disambiguation bias towards popular senses. For example, there are three mentions in the sentence "Japan began the defence of their Asian Cup I title with a lucky 2-1 win against Syria in a Group C championship match on Friday", where the country names Japan and Syria actually denote their national football teams, while the football match name Asian Cup I has little ambiguity. Compared to the team, the country sense occurs more frequently and has a dominant prior, which greatly affects the disambiguation.
By incorporating local context information and global mention information, both the context words (e.g., defence or match) and the neighbor mentions (e.g., Asian Cup I) provide enough clues to identify a soccer-related mention sense instead of the country. Influence of Smoothing Parameter As mentioned above, a mention sense may possess a dominant prior that greatly affects the disambiguation, so we introduce a smoothing parameter γ to control its importance in the overall probability. Figure 3 shows the linking accuracy under different values of γ on the AIDA dataset. γ = 0 indicates that we do not use any prior knowledge, and γ = 1 indicates the case without the smoothing parameter. We can see that both micro and macro accuracy decrease considerably if we do not use the parameter (γ = 1), while using only the local and global probabilities for disambiguation (γ = 0) achieves comparable performance. When γ = 0.05, both accuracies reach their peaks, so this is the optimal and default value in our experiments.

Figure 3: Impact of the smoothing parameter γ.

7 Conclusions and Future Work In this paper, we propose a novel Multi-Prototype Mention Embedding model that jointly learns word, entity and mention sense embeddings. These mention senses capture both textual context information and knowledge from reference entities, and provide an efficient approach to disambiguating mention senses in text. We conduct a series of experiments to demonstrate that multi-prototype mention embeddings improve the quality of both word and entity representations. Using entity linking as a case study, we apply our disambiguation method as well as the multi-prototype mention embeddings to the benchmark dataset, and achieve state-of-the-art results. In the future, we will improve the scalability of our model, learn multi-prototype embeddings for mentions without reference entities in a knowledge base, and introduce compositional approaches to model the internal structure of multi-word mentions.

8 Acknowledgement This work is supported by NSFC Key Program (No. 61533018), 973 Program (No. 2014CB340504), Fund of Online Education Research Center, Ministry of Education (No. 2016ZD102), Key Technologies Research and Development Program of China (No. 2014BAK04B03), NSFC-NRF (No. 61661146007) and the U.S. DARPA LORELEI Program No. HR0011-15-C-0115.

References
Ayman Alhelbawy and Robert J. Gaizauskas. 2014. Graph ranking for collective named entity disambiguation. In ACL (2), pages 75–80.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8):1798–1828.
Yixin Cao, Juanzi Li, Xiaofei Guo, Shuanhu Bai, Heng Ji, and Jie Tang. 2015. Name list only? Target entity disambiguation in short texts. In EMNLP, pages 654–664. https://doi.org/10.18653/v1/D15-1077.
Diego Ceccarelli, Claudio Lucchese, Salvatore Orlando, Raffaele Perego, and Salvatore Trani. 2013. Learning relatedness measures for entity linking. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pages 139–148.
Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In EMNLP, pages 1025–1035. https://doi.org/10.3115/v1/D14-1110.
Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data.
Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 363–370.
Matthew Francis-Landau, Greg Durrett, and Dan Klein. 2016. Capturing semantic similarity for entity linking with convolutional neural networks. In Proceedings of NAACL-HLT, pages 1256–1261. https://doi.org/10.18653/v1/N16-1150.
Xu Han, Zhiyuan Liu, and Maosong Sun. 2016. Joint representation learning of text and knowledge for knowledge graph completion. CoRR abs/1611.04125.
Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 782–792.
Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proc. ACL.
Hongzhao Huang, Larry Heck, and Heng Ji. 2015. Leveraging deep neural networks and knowledge graphs for entity disambiguation. arXiv preprint arXiv:1504.07678.
Lifu Huang, Jonathan May, Xiaoman Pan, Heng Ji, Xiang Ren, Jiawei Han, Lin Zhao, and James A. Hendler. 2017. Liberal entity extraction: Rapid construction of fine-grained entity typing systems. Big Data 5(1):19–31.
Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS) 20(4):422–446.
Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of Wikipedia entities in web text. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 457–466.
Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proc. EMNLP. https://doi.org/10.18653/v1/D15-1200.
Massimiliano Mancini, José Camacho-Collados, Ignacio Iacobacci, and Roberto Navigli. 2016. Embedding words and senses together via joint knowledge-enhanced training. CoRR abs/1612.02703.
Masumi Shirakawa, Haixun Wang, Yangqiu Song, Zhongyuan Wang, Kotaro Nakayama, Takahiro Hara, and Shojiro Nishio. 2011. Entity disambiguation based on a probabilistic taxonomy. Technical Report MSR-TR-2011-125, Microsoft Research.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR abs/1301.3781.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119.
Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In Proc. EMNLP. https://doi.org/10.3115/v1/D14-1113.
Maria Pershina, Yifan He, and Ralph Grishman. 2015. Personalized page rank for named entity disambiguation. In HLT-NAACL, pages 238–243.
Joseph Reisinger and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word meaning. In Proc. NAACL.
Hinrich Schütze. 2008. Introduction to information retrieval. In Proceedings of the international communication of association for computing machinery conference.
Modeling mention, context and entity with neural networks for entity disambiguation. In IJCAI. pages 1333–1339. Nicolas Fauceglia Mariano Rodriguez-Muro Oktie Hassanzadeh Alfio Massimiliano Gliozzo Mohammad Sadoghi Thien Huu Nguyen. 2016. Joint learning of local and global features for entity linking via neural networks. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan. pages 2310– 2320. Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In COLING. Kristina Toutanova, Danqi Chen, Patrick Pantel, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. ACL Association for Computational Linguistics https://doi.org/10.18653/v1/D15-1174. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph and text jointly embedding. In Proc. EMNLP. https://doi.org/10.3115/v1/D14-1167. Zhigang Wang and Juan-Zi Li. 2016. Text-enhanced representation learning for knowledge graph. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Proc. ACL. Jiawei Wu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Knowledge representation via joint learning of sequential text and knowledge graphs. CoRR . Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proc. CoNLL. https://doi.org/10.18653/v1/K16-1025. Hanwang Zhang, Zawlin Kyaw, Shih-Fu Chang, and Tat-Seng Chua. 2017. Visual translation embedding network for visual relation detection. arXiv preprint arXiv:1702.08319 . Hanwang Zhang, Xindi Shang, Wenzhuo Yang, Huan Xu, Huanbo Luan, and Tat-Seng Chua. 2016. Online collaborative learning for open-vocabulary visual classifiers. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 2809–2817. 1633
2017
149
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 158–167 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1015 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 158–167 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1015 Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems Wang Ling♠ Dani Yogatama♠ Chris Dyer♠ Phil Blunsom♠♦ ♠DeepMind ♦University of Oxford {lingwang,dyogatama,cdyer,pblunsom}@google.com Abstract Solving algebraic word problems requires executing a series of arithmetic operations—a program—to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs. 1 Introduction Behaving intelligently often requires mathematical reasoning. Shopkeepers calculate change, tax, and sale prices; agriculturists calculate the proper amounts of fertilizers, pesticides, and water for their crops; and managers analyze productivity. Even determining whether you have enough money to pay for a list of items requires applying addition, multiplication, and comparison. Solving these tasks is challenging as it involves recognizing how goals, entities, and quantities in the real-world map onto a mathematical formalization, computing the solution, and mapping the solution back onto the world. As a proxy for the richness of the real world, a series of papers have used natural language specifications of algebraic word problems, and solved these by either learning to fill in templates that can be solved with equation solvers (Hosseini et al., 2014; Kushman et al., 2014) or inferring and modeling operation sequences (programs) that lead to the final answer (Roy and Roth, 2015). In this paper, we learn to solve algebraic word problems by inducing and modeling programs that generate not only the answer, but an answer rationale, a natural language explanation interspersed with algebraic expressions justifying the overall solution. Such rationales are what examiners require from students in order to demonstrate understanding of the problem solution; they play the very same role in our task. Not only do natural language rationales enhance model interpretability, but they provide a coarse guide to the structure of the arithmetic programs that must be executed. In fact the learner we propose (which relies on a heuristic search; §4) fails to solve this task without modeling the rationales—the search space is too unconstrained. This work is thus related to models that can explain or rationalize their decisions (Hendricks et al., 2016; Harrison et al., 2017). 
However, the use of rationales in this work is quite different from the role they play in most prior work, where interpretation models are trained to generate plausible sounding (but not necessarily accurate) posthoc descriptions of the decision making process they used. In this work, the rationale is generated as a latent variable that gives rise to the answer—it is thus a more faithful representation of the steps used in computing the answer. This paper makes three contributions. First, we have created a new dataset with more than 100,000 algebraic word problems that includes both answers and natural language answer rationales (§2). Figure 1 illustrates three representative instances 158 Problem 1: Question: Two trains running in opposite directions cross a man standing on the platform in 27 seconds and 17 seconds respectively and they cross each other in 23 seconds. The ratio of their speeds is: Options: A) 3/7 B) 3/2 C) 3/88 D) 3/8 E) 2/2 Rationale: Let the speeds of the two trains be x m/sec and y m/sec respectively. Then, length of the first train = 27x meters, and length of the second train = 17 y meters. (27x + 17y) / (x + y) = 23 →27x + 17y = 23x + 23y →4x = 6y →x/y = 3/2. Correct Option: B Problem 2: Question: From a pack of 52 cards, two cards are drawn together at random. What is the probability of both the cards being kings? Options: A) 2/1223 B) 1/122 C) 1/221 D) 3/1253 E) 2/153 Rationale: Let s be the sample space. Then n(s) = 52C2 = 1326 E = event of getting 2 kings out of 4 n(E) = 4C2 = 6 P(E) = 6/1326 = 1/221 Answer is C Correct Option: C Problem 3: Question: For which of the following does p(a)−p(b) = p(ab) for all values of a and b? Options:A) p(x) = x2, B) p(x) = x/2, C) p(x) = x + 5, D) p(x) = 2x1, E) p(x) = |x| Rationale: To solve this easiest way is just put the value and see that if it equals or not. with option A. p(a) = a2 and p(b) = b2 so L.H.S = a2 −b2 and R.H.S = (a −b)2 →a2 + b2 −2ab. so L.H.S not equal to R.H.S with option B. p(a) = a/2 and p(b) = b/2 L.H.S = a/2 −b/2 →1/2(a −b) R.H.S = (a −b)/2 so L.H.S = R.H.S which is the correct answer. answer:B Correct Option: B Figure 1: Examples of solved math problems. from the dataset. Second, we propose a sequence to sequence model that generates a sequence of instructions that, when executed, generates the rationale; only after this is the answer chosen (§3). Since the target program is not given in the training data (most obviously, its specific form will depend on the operations that are supported by the program interpreter); the third contribution is thus a technique for inferring programs that generate a rationale and, ultimately, the answer. Even constrained by a text rationale, the search space of possible programs is quite large, and we employ a heuristic search to find plausible next steps to guide the search for programs (§4). Empirically, we are able to show that state-of-the-art sequence to sequence models are unable to perform above chance on this task, but that our model doubles the accuracy of the baseline (§6). 2 Dataset We built a dataset with 100,000 problems with the annotations shown in Figure 1. Each question is decomposed in four parts, two inputs and two outputs: the description of the problem, which we will denote as the question, and the possible (multiple choice) answer options, denoted as options. Our goal is to generate the description of the rationale used to reach the correct answer, denoted as rationale and the correct option label. 
Problem 1 illustrates an example of an algebra problem, which must be translated into an expression (i.e., (27x + 17y)/(x + y) = 23) and then the desired quantity (x/y) solved for. Problem 2 is an example that could be solved by multi-step arithmetic operations proposed in (Roy and Roth, 2015). Finally, Problem 3 describes a problem that is solved by testing each of the options, which has not been addressed in the past. 2.1 Construction We first collect a set of 34,202 seed problems that consist of multiple option math questions covering a broad range of topics and difficulty levels. Examples of exams with such problems include the GMAT (Graduate Management Admission Test) and GRE (General Test). Many websites contain example math questions in such exams, where the answer is supported by a rationale. Next, we turned to crowdsourcing to generate new questions. We create a task where users are presented with a set of 5 questions from our seed dataset. Then, we ask the Turker to choose one of the questions and write a similar question. We also force the answers and rationale to differ from the original question in order to avoid paraphrases of the original question. Once again, we manually check a subset of the jobs for each Turker for quality control. The type of questions generated using this method vary. Some turkers propose small changes in the values of the questions (e.g., changing the equality p(a)p(b) = p(ab) in Problem 3 to a different equality is a valid question, as long as the rationale and options are rewritten to reflect the change). We designate these as replica problems as the natural language used in the question and rationales tend to be only minimally unaltered. Others propose new problems in the same topic where the generated questions tend to differ more radically from existing ones. Some Turkers also copy math problems available on the web, and we 159 Question Rationale Training Examples 100,949 Dev Examples 250 Test Examples 250 Numeric Average Length 9.6 16.6 Vocab Size 21,009 14,745 Non-Numeric Average Length 67.8 89.1 Vocab Size 17,849 25,034 All Average Length 77.4 105.7 Vocab Size 38,858 39,779 Table 1: Descriptive statistics of our dataset. define in the instructions that this is not allowed, as it will generate multiple copies of the same problem in the dataset if two or more Turkers copy from the same resource. These Turkers can be detected by checking the nearest neighbours within the collected datasets as problems obtained from online resources are frequently submitted by more than one Turker. Using this method, we obtained 70,318 additional questions. 2.2 Statistics Descriptive statistics of the dataset is shown in Figure 1. In total, we collected 104,519 problems (34,202 seed problems and 70,318 crowdsourced problems). We removed 500 problems as heldout set (250 for development and 250 for testing). As replicas of the heldout problems may be present in the training set, these were removed manually by listing for each heldout instance the closest problems in the training set in terms of character-based Levenstein distance. After filtering, 100,949 problems remained in the training set. We also show the average number of tokens (total number of tokens in the question, options and rationale) and the vocabulary size of the questions and rationales. Finally, we provide the same statistics exclusively for tokens that are numeric values and tokens that are not. Figure 2 shows the distribution of examples based on the total number of tokens. 
We can see that most examples consist of 30 to 500 tokens, but there are also extremely long examples with more than 1000 tokens in our dataset. 3 Model Generating rationales for math problems is challenging as it requires models that learn to perform math operations at a finer granularity as each step within the solution must be explained. For instance, in Problem 1, the equation (27x + 0 200 400 600 800 1000 0 1000 2000 3000 frequency length Figure 2: Distribution of examples per length. 17y)/(x + y) = 23 must be solved to obtain the answer. In previous work (Kushman et al., 2014), this could be done by feeding the equation into an expression solver to obtain x/y = 3/2. However, this would skip the intermediate steps 27x+17y = 23x+23y and 4x = 6y, which must also be generated in our problem. We propose a model that jointly learns to generate the text in the rationale, and to perform the math operations required to solve the problem. This is done by generating a program, containing both instructions that generate output and instructions that simply generate intermediate values used by following instructions. 3.1 Problem Definition In traditional sequence to sequence models (Sutskever et al., 2014; Bahdanau et al., 2014), the goal is to predict the output sequence y = y1, . . . , y|y| from the input sequence x = x1, . . . , x|x|, with lengths |y| and |x|. In our particular problem, we are given the problem and the set of options, and wish to predict the rationale and the correct option. We set x as the sequence of words in the problem, concatenated with words in each of the options separated by a special tag. Note that knowledge about the possible options is required as some problems are solved by the process of elimination or by testing each of the options (e.g. Problem 3). We wish to generate y, which is the sequence of words in the rationale. We also append the correct option as the last word in y, which is interpreted as the chosen option. For example, y in Problem 1 is “Let the . . . = 3/2 . ⟨EOR⟩B ⟨EOS⟩”, whereas in Problem 2 it is “Let s be . . . Answer is C ⟨EOR⟩C ⟨EOS⟩”, where “⟨EOS⟩” is the end of sentence symbol and “⟨EOR⟩” is the end of rationale symbol. 160 i x z v r 1 From Id(“Let”) Let y1 2 a Id(“s”) s y2 3 pack Id(“be”) be y3 4 of Id(“the”) the y4 5 52 Id(“sample”) sample y5 6 cards Id(“space”) space y6 7 , Id(“.”) . y7 8 two Id(“\n”) \n y8 9 cards Id(“Then”) Then y9 10 are Id(“n”) n y10 11 drawn Id(“(”) ( y11 12 together Id(“s”) s y12 13 at Id(“)”) ) y13 14 random Id(“=”) = y14 15 . Str to Float(x5) 52 m1 16 What Float to Str(m1) 52 y15 17 is Id(“C”) C y16 18 the Id(“2”) 2 y17 19 probability Id(“=”) = y18 20 of Str to Float(y17) 2 m2 21 both Choose(m1,m2) 1326 m3 22 cards Float to Str(m3) 1326 y19 23 being Id(“E”) E y20 24 kings Id(“=”) = y21 25 ? Id(“event”) event y22 26 <O> Id(“of”) of y23 27 A) Id(“getting”) getting y24 28 2/1223 Id(“2”) 2 y25 29 <O> Id(“kings”) kings y26 30 B) Id(“out”) out y27 31 1/122 Id(“of”) of y28 . . . . . . ... ... . . . |z| Id(“⟨EOS⟩”) ⟨EOS⟩ y|y| Table 2: Example of a program z that would generate the output y. In v, italics indicates string types; bold indicates float types. Refer to §3.3 for description of variable names. 3.2 Generating Programs to Generate Rationales We wish to generate a latent sequence of program instructions, z = z1, . . . , z|z|, with length |z|, that will generate y when executed. We express z as a program that can access x, y, and the memory buffer m. 
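To make this construction concrete, the following is a minimal sketch (in Python) of how one problem could be serialized into the input sequence x and the target sequence y; the helper and its tokenization are illustrative assumptions rather than the exact preprocessing used for the dataset, the option tag is written <O> as in Table 2 below, and indices are 0-based.

    def build_sequences(question_tokens, option_strings, rationale_tokens, correct_option):
        """Serialize one problem into the input sequence x and target sequence y.

        question_tokens:  tokens of the problem description
        option_strings:   the multiple-choice options, e.g. ["3/7", "3/2", "3/88", "3/8", "2/2"]
        rationale_tokens: tokens of the written rationale
        correct_option:   one of "A".."E"
        """
        # x: the question, then each option preceded by a special tag and its label
        x = list(question_tokens)
        for label, option in zip("ABCDE", option_strings):
            x += ["<O>", label + ")"] + option.split()
        # y: the rationale, the end-of-rationale tag, the chosen option, end of sequence
        y = list(rationale_tokens) + ["<EOR>", correct_option, "<EOS>"]
        return x, y

    # e.g. for Problem 1 above, y ends with ..., "=", "3/2", ".", "<EOR>", "B", "<EOS>"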
Upon finishing execution we expect that the sequence of output tokens to be placed in the output vector y. Table 2 illustrates an example of a sequence of instructions that would generate an excerpt from Problem 2, where columns x, z, v, and r denote the input sequence, the instruction sequence (program), the values of executing the instruction, and where each value vi is written (i.e., either to the output or to the memory). In this example, instructions from indexes 1 to 14 simply fill each position with the observed output y1, . . . , y14 with a string, where the Id operation simply returns its parameter without applying any operation. As such, running this operation is analogous to generating a word by sampling from a softmax over a vocabulary. However, instruction z15 reads the input word x5, 52, and applies the operation Str to Float, which converts the word 52 into a floating point number, and the same is done for instruction z20, which reads a previously generated output word y17. Unlike, instructions z1, . . . , z14, these operations write to the external memory m, which stores intermediate values. A more sophisticated instruction—which shows some of the power of our model—is z21 = Choose(m1, m2) →m3 which evaluates m1 m2  and stores the result in m3. This process repeats until the model generates the end-of-sentence symbol. The last token of the program as said previously must generate the correct option value, from “A” to “E”. By training a model to generate instructions that can manipulate existing tokens, the model benefits from the additional expressiveness needed to solve math problems within the generation process. In total we define 22 different operations, 13 of which are frequently used operations when solving math problems. These are: Id, Add, Subtract, Multiply, Divide, Power, Log, Sqrt, Sine, Cosine, Tangent, Factorial, and Choose (number of combinations). We also provide 2 operations to convert between Radians and Degrees, as these are needed for the sine, cosine and tangent operations. There are 6 operations that convert floating point numbers into strings and vice-versa. These include the Str to Float and Float to Str operations described previously, as well as operations which convert between floating point numbers and fractions, since in many math problems the answers are in the form “3/4”. For the same reason, an operation to convert between a floating point number and number grouped in thousands is also used (e.g. 1000000 to “1,000,000” or “1.000.000”). Finally, we define an operation (Check) that given the input string, searches through the list of options and returns a string with the option index in {“A”, “B”, “C”, “D”, “E”}. If the input value does not match any of the options, or more than one option contains that value, it cannot be applied. For instance, in Problem 2, once the correct probability “1/221” is generated, by applying the check operation to this number we can obtain correct option “C”. 161 hi softmax oi ri softmax ri qi,j=1 softmax qij softmax copy input aij qi,j+1 copy output hi+1 j < argc(oi)? vi execute Figure 3: Illustration of the generation process of a single instruction tuple at timestamp i. 3.3 Generating and Executing Instructions In our model, programs consist of sequences of instructions, z. We turn now to how we model each zi, conditional on the text program specification, and the program’s history. 
The instruction zi is a tuple consisting of an operation (oi), an ordered sequence of its arguments (ai), and a decision about where its results will be placed (ri) (is it appended in the output y or in a memory buffer m?), and the result of applying the operation to its arguments (vi). That is, zi = (oi, ri, ai, vi). Formally, oi is an element of the pre-specified set of operations O, which contains, for example add, div, Str to Float, etc. The number of arguments required by oi is given by argc(oi), e.g., argc(add) = 2 and argc(log) = 1. The arguments are ai = ai,1, . . . , ai,argc(oi). An instruction will generate a return value vi upon execution, which will either be placed in the output y or hidden. This decision is controlled by ri. We define the instruction probability as: p(oi, ai, ri,vi | z<i, x, y, m) = p(oi | z<i, x) × p(ri | z<i, x, oi)× argc(oi) Y j=1 p(ai,j | z<i, x, oi, m, y)× [vi = apply(oi, a)], where [p] evaluates to 1 if p is true and 0 otherwise, and apply(f, x) evaluates the operation f with arguments x. Note that the apply function is not learned, but pre-defined. The network used to generate an instruction at a given timestamp i is illustrated in Figure 3. We first use the recurrent state hi to generate p(oi | z<i, x) = softmax oi∈O (hi), using a softmax over the set of available operations O. In order to predict ri, we generate a new hidden state ri, which is a function of the current program context hi, and an embedding of the current predicted operation, oi. As the output can either be placed in the memory m or the output y, we compute the probability p(ri = OUTPUT | z<i, x, oi) = σ(ri · wr + br), where σ is the logistic sigmoid function. If ri = OUTPUT, vi is appended to the output y; otherwise it is appended to the memory m. Once we generate ri, we must predict ai, the argc(oi)-length sequence of arguments that operation oi requires. The jth argument ai,j can be either generated from a softmax over the vocabulary, copied from the input vector x, or copied from previously generated values in the output y or memory m. This decision is modeled using a latent predictor network (Ling et al., 2016), where the control over which method used to generate ai,j is governed by a latent variable qi,j ∈ {SOFTMAX, COPY-INPUT, COPY-OUTPUT}. Similar to when predicting ri, in order to make this choice, we also generate a new hidden state for each argument slot j, denoted by qi,j with an LSTM, feeding the previous argument in at each time step, and initializing it with ri and by reading the predicted value of the output ri. • If qi,j = SOFTMAX, ai,j is generated by sampling from a softmax over the vocabulary Y, p(ai,j | qi,j = SOFTMAX) = softmax ai,j∈Y (qi,j). This corresponds to a case where a string is used as argument (e.g. y1=“Let”). • If qi,j = COPY-INPUT, ai,j is obtained by copying an element from the input vector with a pointer network (Vinyals et al., 2015) over input words x1, . . . , x|x|, represented by their encoder LSTM state u1, . . . , u|x|. As such, we compute the probability distribution over input words as: p(ai,j | qi,j =COPY-INPUT) = (1) softmax ai,j∈x1,...,x|x| f(uai,j, qi,j)  Function f computes the affinity of each token xai,j and the current output context qi,j. A common implementation of f, which we follow, is to apply a linear projection from [uai,j; qi,j] into a fixed size vector (where [u; v] is vector concatenation), followed by a tanh and a linear projection into a single value. 
162 • If qi,j = COPY-OUTPUT, the model copies from either the output y or the memory m. This is equivalent to finding the instruction zi, where the value was generated. Once again, we define a pointer network that points to the output instructions and define the distribution over previously generated instructions as: p(ai,j | qi,j =COPY-OUTPUT) = softmax ai,j∈z1,...,zi−1 f(hai,j, qi,j)  Here, the affinity is computed using the decoder state hai,j and the current state qi,j. Finally, we embed the argument ai,j1 and the state qi,j to generate the next state qi,j+1. Once all arguments for oi are generated, the operation is executed to obtain vi. Then, the embedding of vi, the final state of the instruction qi,|ai| and the previous state hi are used to generate the state at the next timestamp hi+1. 4 Inducing Programs while Learning The set of instructions z that will generate y is unobserved. Thus, given x we optimize the marginal probability function: p(y | x) = X z∈Z p(y | z)p(z | x) = X z∈Z(y) p(z | x), where p(y | z) is the Kronecker delta function δe(z),y, which is 1 if the execution of z, denoted as e(z), generates y and 0 otherwise. Thus, we can redefine p(y|x), the marginal over all programs Z, as a marginal over programs that would generate y, defined as Z(y). As marginalizing over z ∈ Z(y) is intractable, we approximate the marginal by generating samples from our model. Denote the set of samples that are generated by ˆZ(y). We maximize P z ∈ˆZ(y)p(z|x). However, generating programs that generate y is not trivial, as randomly sampling from the RNN distribution over instructions at each timestamp is unlikely to generate a sequence z ∈Z(y). This is analogous to the question answering work in Liang et al. (2016), where the query that 1 The embeddings of a given argument ai,j and the return value vi are obtained with a lookup table embedding and two flags indicating whether it is a string and whether it is a float. Furthermore, if the the value is a float we also add its numeric value as a feature. generates the correct answer must be found during inference, and training proved to be difficult without supervision. In Roy and Roth (2015) this problem is also addressed by adding prior knowledge to constrain the exponential space. In our work, we leverage the fact that we are generating rationales, where there is a sense of progression within the rationale. That is, we assume that the rationale solves the problem step by step. For instance, in Problem 2, the rationale first describes the number of combinations of two cards in a deck of 52 cards, then describes the number of combinations of two kings, and finally computes the probability of drawing two kings. Thus, while generating the final answer without the rationale requires a long sequence of latent instructions, generating each of the tokens of the rationale requires far less operations. More formally, given the sequence z1, . . . , zi−1 generated so far, and the possible values for zi given by the network, denoted Zi, we wish to filter Zi to Zi(yk), which denotes a set of possible options that contain at least one path capable of generating the next token at index k. Finding the set Zi(yk) is achieved by testing all combinations of instructions that are possible with at most one level of indirection, and keeping those that can generate yk. This means that the model can only generate one intermediate value in memory (not including the operations that convert strings into floating point values and vice-versa). Decoding. 
During decoding we find the most likely sequence of instructions z given x, which can be performed with a stack-based decoder. However, it is important to refer that each generated instruction zi = (oi, ri, ai,1, . . . , ai,|ai|, vi) must be executed to obtain vi. To avoid generating unexecutable code—e.g., log(0)—each hypothesis instruction is executed and removed if an error occurs. Finally, once the “⟨EOR⟩” tag is generated, we only allow instructions that would generate one of the option “A” to “E” to be generated, which guarantees that one of the options is chosen. 5 Staged Back-propagation As it is shown in Figure 2, math rationales with more than 200 tokens are not uncommon, and with additional intermediate instructions, the size z can easily exceed 400. This poses a practical challenge for training the model. For both the attention and copy mechanisms, 163 for each instruction zi, the model needs to compute the probability distribution between all the attendable units c conditioned on the previous state hi−1. For the attention model and input copy mechanisms, c = x0,i−1 and for the output copy mechanism c = z. These operations generally involve an exponential number of matrix multiplications as the size of c and z grows. For instance, during the computation of the probabilities for the input copy mechanism in Equation 1, the affinity function f between the current context q and a given input uk is generally implemented by projecting u and q into a single vector followed by a non-linearity, which is projected into a single affinity value. Thus, for each possible input u, 3 matrix multiplications must be performed. Furthermore, for RNN unrolling, parameters and intermediate outputs for these operations must be replicated for each timestamp. Thus, as z becomes larger the attention and copy mechanisms quickly become a memory bottleneck as the computation graph becomes too large to fit on the GPU. In contrast, the sequence-to-sequence model proposed in (Sutskever et al., 2014), does not suffer from these issues as each timestamp is dependent only on the previous state hi−1. To deal with this, we use a training method we call staged back-propagation which saves memory by considering slices of K tokens in z, rather than the full sequence. That is, to train on a minibatch where |z| = 300 with K = 100, we would actually train on 3 mini-batches, where the first batch would optimize for the first z1:100, the second for z101:200 and the third for z201:300. The advantage of this method is that memory intensive operations, such as attention and the copy mechanism, only need to be unrolled for K steps, and K can be adjusted so that the computation graph fits in memory. However, unlike truncated back-propagation for language modeling, where context outside the scope of K is ignored, sequence-to-sequence models require global context. Thus, the sequence of states h is still built for the whole sequence z. Afterwards, we obtain a slice hj:j+K, and compute the attention vector.2 Finally, the prediction of the instruction is conditioned on the LSTM state and the attention vector. 2This modeling strategy is sometimes known as late fusion, as the attention vector is not used for state propagation, it is incorporated “later”. 6 Experiments We apply our model to the task of generating rationales for solutions to math problems, evaluating it on both the quality of the rationale and the ability of the model to obtain correct answers. 
6.1 Baselines As the baseline we use the attention-based sequence to sequence model proposed by Bahdanau et al. (2014), and proposed augmentations, allowing it to copy from the input (Ling et al., 2016) and from the output (Merity et al., 2016). 6.2 Hyperparameters We used a two-layer LSTM with a hidden size of H = 200, and word embeddings with size 200. The number of levels that the graph G is expanded during sampling D is set to 5. Decoding is performed with a beam of 200. As for the vocabulary of the softmax and embeddings, we keep the most frequent 20,000 word types, and replace the rest of the words with an unknown token. During training, the model only learns to predict a word as an unknown token, when there is no other alternative to generate the word. 6.3 Evaluation Metrics The evaluation of the rationales is performed with average sentence level perplexity and BLEU4 (Papineni et al., 2002). When a model cannot generate a token for perplexity computation, we predict unknown token. This benefits the baselines as they are less expressive. As the perplexity of our model is dependent on the latent program that is generated, we force decode our model to generate the rationale, while maximizing the probability of the program. This is analogous to the method used to obtain sample programs described in Section 4, but we choose the most likely instructions at each timestamp instead of sampling. Finally, the correctness of the answer is evaluated by computing the percentage of the questions, where the chosen option matches the correct one. 6.4 Results The test set results, evaluated on perplexity, BLEU, and accuracy, are presented in Table 3. Perplexity. In terms of perplexity, we observe that the regular sequence to sequence model fares poorly on this dataset, as the model requires the generation of many values that tend to be 164 Model Perplexity BLEU Accuracy Seq2Seq 524.7 8.57 20.8 +Copy Input 46.8 21.3 20.4 +Copy Output 45.9 20.6 20.2 Our Model 28.5 27.2 36.4 Table 3: Results over the test set measured in Perplexity, BLEU and Accuracy. sparse. Adding an input copy mechanism greatly improves the perplexity as it allows the generation process to use values that were mentioned in the question. The output copying mechanism improves perplexity slightly over the input copy mechanism, as many values are repeated after their first occurrence. For instance, in Problem 2, the value “1326” is used twice, so even though the model cannot generate it easily in the first occurrence, the second one can simply be generated by copying the first one. We can observe that our model yields significant improvements over the baselines, demonstrating that the ability to generate new values by algebraic manipulation is essential in this task. An example of a program that is inferred is shown in Figure 4. The graph was generated by finding the most likely program z that generates y. Each node isolates a value in x, m, or y, where arrows indicate an operation executed with the outgoing nodes as arguments and incoming node as the return of the operation. For simplicity, operations that copy or convert values (e.g. from string to float) were not included, but nodes that were copied/converted share the same color. Examples of tokens where our model can obtain the perplexity reduction are the values “0.025”, “0.023”, “0.002” and finally the answer “E” , as these cannot be copied from the input or output. BLEU. We observe that the regular sequence to sequence model achieves a low BLEU score. 
In fact, due to the high perplexities the model generates very short rationales, which frequently consist of segments similar to “Answer should be D”, as most rationales end with similar statements. By applying the copy mechanism the BLEU score improves substantially, as the model can define the variables that are used in the rationale. Interestingly, the output copy mechanism adds no further improvement in the perplexity evaluation. This is because during decoding all values that can be copied from the output are values that could have been generated by the model either from the softmax or the input copy mechanism. As such, adding an output copying mechanism adds little to the expressiveness of the model during decoding. Finally, our model can achieve the highest BLEU score as it has the mechanism to generate the intermediate and final values in the rationale. Accuracy. In terms of accuracy, we see that all baseline models obtain values close to chance (20%), indicating that they are completely unable to solve the problem. In contrast, we see that our model can solve problems at a rate that is significantly higher than chance, demonstrating the value of our program-driven approach, and its ability to learn to generate programs. In general, the problems we solve correctly correspond to simple problems that can be solved in one or two operations. Examples include questions such as “Billy cut up each cake into 10 slices, and ended up with 120 slices altogether. How many cakes did she cut up? A) 9 B) 7 C) 12 D) 14 E) 16”, which can be solved in a single step. In this case, our model predicts “120 / 10 = 12 cakes. Answer is C” as the rationale, which is reasonable. 6.5 Discussion. While we show that our model can outperform the models built up to date, generating complex rationales as those shown in Figure 1 correctly is still an unsolved problem, as each additional step adds complexity to the problem both during inference and decoding. Yet, this is the first result showing that it is possible to solve math problems in such a manner, and we believe this modeling approach and dataset will drive work on this problem. 7 Related Work Extensive efforts have been made in the domain of math problem solving (Hosseini et al., 2014; Kushman et al., 2014; Roy and Roth, 2015), which aim at obtaining the correct answer to a given math problem. Other work has focused on learning to map math expressions into formal languages (Roy et al., 2016). We aim to generate natural language rationales, where the bindings between variables and the problem solving approach are mixed into a single generative model that attempts to solve the problem while explaining the approach taken. Our approach is strongly tied with the work on sequence to sequence transduction using the encoder-decoder paradigm (Sutskever et al., 2014; 165 Bottle R contains 250 capsules and costs $ 6.25 . Bottle T contains 130 capsules and costs $ 2.99 . What is the difference between the cost per capsule for bottle R and the cost per capsule for bottle T ? (A) $ 0.25 (B) $ 0.12 (C) $ 0.05 (D) $ 0.03 (E) $ 0.002 Cost per capsule in R is 6.25 / 250 = 0.025 Cost per capsule in T is 2.99 / 130 = 0.023 The difference is 0.002 The answer is E 250 6.25 0.025 2.99 130 0.023 0.002 E div(m1,m2) div(m4,m5) sub(m6,m3) check(m7) \n \n \n <EOS> y m x Figure 4: Illustration of the most likely latent program inferred by our algorithm to explain a held-out question-rationale pair. 
Bahdanau et al., 2014; Kalchbrenner and Blunsom, 2013), and inherits ideas from the extensive literature on semantic parsing (Jones et al., 2012; Berant et al., 2013; Andreas et al., 2013; Quirk et al., 2015; Liang et al., 2016; Neelakantan et al., 2016) and program generation (Reed and de Freitas, 2016; Graves et al., 2016), namely, the usage of an external memory, the application of different operators over values in the memory and the copying of stored values into the output sequence. Providing textual explanations for classification decisions has begun to receive attention, as part of increased interest in creating models whose decisions can be interpreted. Lei et al. (2016), jointly modeled both a classification decision, and the selection of the most relevant subsection of a document for making the classification decision. Hendricks et al. (2016) generate textual explanations for visual classification problems, but in contrast to our model, they first generate an answer, and then, conditional on the answer, generate an explanation. This effectively creates a post-hoc justification for a classification decision rather than a program for deducing an answer. These papers, like ours, have jointly modeled rationales and answer predictions; however, we are the first to use rationales to guide program induction. 8 Conclusion In this work, we addressed the problem of generating rationales for math problems, where the task is to not only obtain the correct answer of the problem, but also generate a description of the method used to solve the problem. To this end, we collect 100,000 question and rationale pairs, and propose a model that can generate natural language and perform arithmetic operations in the same decoding process. Experiments show that our method outperforms existing neural models, in both the fluency of the rationales that are generated and the ability to solve the problem. References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proc. of ACL. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben166 gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv 1409.0473. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proc. of EMNLP. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwiska, Sergio Gmez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adri Puigdomnech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. Nature 538(7626):471– 476. Brent Harrison, Upol Ehsan, and Mark O. Riedl. 2017. Rationalization: A neural machine translation approach to generating natural language explanations. CoRR abs/1702.07826. Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In Proc. ECCV. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proc. of EMNLP. Bevan Keeley Jones, Mark Johnson, and Sharon Goldwater. 2012. Semantic parsing with bayesian tree transducers. In Proc. of ACL. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proc. of EMNLP. 
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proc. of ACL. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proc. of EMNLP. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. arXiv 1611.00020. Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tom´as Kocisk´y, Andrew Senior, Fumin Wang, and Phil Blunsom. 2016. Latent predictor networks for code generation. In Proc. of ACL. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv 1609.07843. Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In Proc. ICLR. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proc. of ACL. Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proc. of ACL. Scott E. Reed and Nando de Freitas. 2016. Neural programmer-interpreters. In Proc. of ICLR. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proc. of EMNLP. Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016. Equation parsing: Mapping sentences to grounded equations. In Proc. of EMNLP. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. arXiv 1409.3215. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proc. of NIPS. 167
2017
15
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1634–1644 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1150 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1634–1644 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1150 Interactive Learning of Grounded Verb Semantics towards Human-Robot Communication Lanbo She and Joyce Y. Chai Department of Computer Science and Engineering Michigan State University East Lansing, Michigan 48824, USA {shelanbo, jchai}@cse.msu.edu Abstract To enable human-robot communication and collaboration, previous works represent grounded verb semantics as the potential change of state to the physical world caused by these verbs. Grounded verb semantics are acquired mainly based on the parallel data of the use of a verb phrase and its corresponding sequences of primitive actions demonstrated by humans. The rich interaction between teachers and students that is considered important in learning new skills has not yet been explored. To address this limitation, this paper presents a new interactive learning approach that allows robots to proactively engage in interaction with human partners by asking good questions to learn models for grounded verb semantics. The proposed approach uses reinforcement learning to allow the robot to acquire an optimal policy for its question-asking behaviors by maximizing the long-term reward. Our empirical results have shown that the interactive learning approach leads to more reliable models for grounded verb semantics, especially in the noisy environment which is full of uncertainties. Compared to previous work, the models acquired from interactive learning result in a 48% to 145% performance gain when applied in new situations. 1 Introduction In communication with cognitive robots, one of the challenges is that robots do not have sufficient linguistic or world knowledge as humans do. For example, if a human asks a robot to boil the water but the robot has no knowledge what this verb phrase means and how this verb phrase relates to its own actuator, the robot will not be able to execute this command. Thus it is important for robots to continuously learn the meanings of new verbs and how the verbs are grounded to its underlying action representations from its human partners. To support learning of grounded verb semantics, previous works (She et al., 2014; Misra et al., 2015; She and Chai, 2016) rely on multiple instances of human demonstrations of corresponding actions. From these demonstrations, robots capture the state change of the environment caused by the actions and represent verb semantics as the desired goal state. One advantage of such state-based representation is that, when robots encounter the same verbs/commands in a new situation, the desired goal state will trigger the action planner to automatically plan a sequence of primitive actions to execute the command. While the state-based verb semantics provides an important link to connect verbs to the robot’s actuator, previous works also present several limitations. First of all, previous approaches were developed under the assumption of perfect perception of the environment (She et al., 2014; Misra et al., 2015; She and Chai, 2016). However, this assumption does not hold in real-world situated interaction. 
The robot’s representation of the environment is often incomplete and error-prone due to its limited sensing capabilities. Thus it is not clear whether previous approaches can scale up to handle noisy and incomplete environment. Second, most previous works rely on multiple demonstration examples to acquire grounded verb models. Each demonstration is simply a sequence of primitive actions associated with a verb. No other type of interaction between humans and robots is explored. Previous cognitive studies (Bransford et al., 2000) on how people learn have shown that social interaction (e.g., conver1634 sation with teachers) can enhance student learning experience and improve learning outcomes. For robotic learning, previous work (Cakmak and Thomaz, 2012) has also demonstrated the necessity of question answering in the learning process. Thus, in our view, interactive learning beyond demonstration of primitive actions should play a vital role in the robot’s acquisition of more reliable models of grounded verb semantics. This is especially important because the robot’s perception of the world is noisy and incomplete, human language can be ambiguous, and the robot may lack the relevant linguistic or world knowledge during the learning process. To address these limitations, we have developed a new interactive learning approach where robots actively engage with humans to acquire models of grounded verb semantics. Our approach explores the space of interactive question answering between humans and robots during the learning process. In particular, motivated by previous work on robot learning (Cakmak and Thomaz, 2012), we designed a set of questions that are pertinent to verb semantic representations. We further applied reinforcement learning to learn an optimal policy that guides the robot in deciding when to ask what questions. Our empirical results have shown that this interactive learning process leads to more reliable representations of grounded verb semantics, which contribute to significantly better action performance in new situations. When the environment is noisy and uncertain (as in a realistic situation), the models acquired from interactive learning result in a performance gain between 48% and 145% when applied in new situations. Our results further demonstrate that the interaction policy acquired from reinforcement learning leads to the most efficient interaction and the most reliable verb models. 2 Related Work To enable human-robot communication and collaboration, recent years have seen an increasing amount of works which aim to learn semantics of language that are grounded to agents’ perception (Gorniak and Roy, 2007; Tellex et al., 2014; Kim and Mooney, 2012; Matuszek et al., 2012a; Liu et al., 2014; Liu and Chai, 2015; Thomason et al., 2015, 2016; Yang et al., 2016; Gao et al., 2016) and action (Matuszek et al., 2012b; Artzi and Zettlemoyer, 2013; She et al., 2014; Misra et al., 2014, 2015; She and Chai, 2016). Specifically for verb semantics, recent works explored the connection between verbs and action planning (She et al., 2014; Misra et al., 2014, 2015; She and Chai, 2016), for example, by representing grounded verbs semantics as the desired goal state of the physical world that is a result of the corresponding actions. Such representations are learned based on example actions demonstrated by humans. Once acquired, these representations will allow agents to interpret verbs/commands issued by humans in new situations and apply action planning to execute actions. 
Given its clear advantage in connecting verbs with actions, our work also applies the state-based representation for verb semantics. However, we have developed a new approach which goes beyond learning from demonstrated examples by exploring how rich interaction between humans and agents can be used to acquire models for grounded verb semantics. This approach was motivated by previous cognitive studies (Bransford et al., 2000) on how people learn as well as recent findings on robot skill learning (Cakmak and Thomaz, 2012). One of the principles for human learning is that “learning is enhanced through socially supported interactions”. Studies have shown that social interaction with teachers and peers (e.g., substantive conversation) can enhance student learning experience and improve learning outcomes. In recent work on interactive robot learning of new skills (Cakmak and Thomaz, 2012), researchers identified three types of questions that can be used by a human/robot student to enhance learning outcomes: 1) demonstration query (i.e., asking for a full or partial demonstration of the task), 2) label query (i.e., asking whether an execution is correct), and 3) feature query (i.e., asking for a specific feature or aspect of the task). Inspired by these previous findings, our work explores interactive learning to acquire grounded verb semantics. In particular, we aim to address when to ask what questions during interaction to improve learning. 3 Acquisition of Grounded Verb Semantics This section gives a brief review on acquisition of grounded verb semantics and illustrates the differences between previous approaches and our approach using interactive learning. 1635 Figure 1: An example of acquiring state-based representation for verb semantics based on an initial environment Ei, and a language command Li, the primitive action sequence −→ Ai demonstrated by the human, and the final environment E′ i that results from the execution of −→ Ai in Ei. 3.1 State-based Representation As shown in Figure 1, the verb semantics (e.g., boil(x)) is represented by the goal state (e.g., Status(x, TempHigh)) which is the result of the demonstrated primitive actions. Given the verb phrase boil the water (i.e., Li), the human teaches the robot how to accomplish the corresponding action based on a sequence of primitive actions −→ Ai. By comparing the final environment E′ i with the initial environment Ei, the robot is able to identify the state change of the environment, which becomes a hypothesis of goal state to represent verb semantics. Compared to procedure-based representations, the state-based representation supports automated planning at the execution time. It is environment-independent and more generalizable. In (She and Chai, 2016), instead of one hypothesis, it maintains a specific-to-general hypothesis space as shown in Figure 2 to capture all goal hypotheses of a particular verb frame. Specifically, it assumes that one verb frame may lead to different outcomes under different environments, where each possible outcome is represented by one node in the hierarchical graph and each node is a conjunction of multiple atomic fluents. 1 Given a language command (i.e., a verb phrase), a robot will engage in the following processes: • Execution. In this process, the robot will select a hypothesis from the space of hypotheses that is most relevant to the current situation and use the corresponding goal state to plan for actions to execute. • Learning. 
When the robot fails to select a hypothesis or fails to execute the action, it will ask the human for a demonstration. 1In this work, we assume the set of atomic fluents representing environment state are given and do not address the question of whether these predicates are adequate to represent a domain. Figure 2: An example hypothesis space for the verb frame fill(x, y). Based on the demonstrated actions, the robot will learn a new representation (i.e., new nodes) and update the hypothesis space. 3.2 Noisy Environment Figure 3: An example probabilistic sensing result. Previous works represent the environment Ei as a conjunction of grounded state fluents. Each fluent consists of a predicate and one or more arguments (i.e., objects in the physical world, or object status), representing one aspect of the perceived environment. An example of a fluent is “Has(Kettle1, WATER)” meaning object Kettle1 has some water inside, where Has is the predicate, and Kettle1 and WATER are arguments. The set of fluents include the status of the robot (e.g., Grasping(Kettle1)), the status of different objects (e.g., Status(WATER, TempHigh)), and relations between objects (e.g., On(Kettle1, Stove)). One limitation of the previous works is that the envi1636 Figure 4: A general framework of robot interactive learning. KB stands for knowledge base, θE stands for Interaction Strategy for Execution, and θD stands for Interaction Strategy for Learning. Only demonstration(She and Chai, 2016) Interactive learning (this work) Execution Example Execution Example H1: Boil the water. H1: Boil the water. R1: (select a hypothesis and execute the action accordingly) R1: I see a silver object on the table, is that a pot? H2: (provide true/false feedback on whether the execution is H2: Yes. correct) R2: Should I leave the pot on the stove after boil? H3: Yes. R3: Ok. (select a hypothesis to execute) H4: Thank you. Learning Example Learning Example H3: Boil the water. H5: Boil the water. R3: I don’t know how to boil, could you show me an example? R5: I don’t know how to boil, could you show me an example? H4: (human shows the robot how to achieve the corresponding action through a sequence of primitive actions) H6: Ok. (demonstrate the task, at the end put the pot on a heat pad) R4: (indicate the acceptance and induce a hypothesis to repR6: So after boiling something, should I put it on a heat pad? resent verb semantics) H7: No, you don’t have to. R7: I see. (induce goal state hypothesis) Table 1: Examples to show differences between learning through demonstrations as in the previous works (She and Chai, 2016) and the proposed learning from interaction. ronment has a perfect, deterministic representation, as shown in Figure 1. This is clearly not the case in the realistic physical world. In reality, given limitations of sensor capabilities, the environment representation is often partial, error prone, and full of uncertainties. Figure 3 shows an example of a more realistic representation where each fluent comes with a confidence between 0 and 1 to indicate how likely that particular fluent can be detected in the current environment. Thus, it is unclear whether the previous work is able to handle representations with uncertainties. Our interactive learning approach aims to address these uncertainties through interactive question answering with human partners. 4 Interactive Learning 4.1 Framework of Interactive Learning Figure 4 shows a general framework for interactive learning of action verbs. 
It aims to support a life-long learning cycle for robots, where the robot can continuously (1) engage in collaboration and communication with humans based on its existing knowledge; (2) acquire new verbs by learning from humans and experiencing the change of the world (i.e., grounded verb semantics as in this work); and (3) learn how to interact (i.e., update interaction policies). The lifelong learning cycle is composed by a sequence of interactive learning episodes (Episode 1, 2...) where each episode consists of either an execution phase or a learning phase or both. The execution phase starts with a human request for action (e.g., boil the water). According to its interaction policy, the robotic agent may choose to ask one or more questions (i.e., Q+ i ) and wait for human answers (i.e., A+ i ), or select a hypothesis from its existing knowledge base to execute the command (i.e., Execute). With the human feedback of the execution, the robot can update its interaction policy and existing knowledge. In the learning phase, the robot can initiate the learning by requesting a demonstration from the human. After the human performs the task, 1637 the robotic agent can either choose to update its knowledge if it feels confident, or it can choose to ask the human one or more questions before updating its knowledge. 4.2 Examples of Interactive Learning Table 1 illustrates the differences between the previous approach that acquires verb models based solely on demonstrations and our current work that acquires models based on interactive learning. As shown in Table 1, under the demonstration setting, humans only provide a demonstration of primitive actions and there’s no interactive question answering. In the interactive learning setting, the robot can proactively choose to ask questions regarding the uncertainties either about the environment (e.g., R1), the goal (e.g., R2), or the demonstrations (e.g., R6). Our hypothesis is that rich interactions based on question answering will allow the robot to learn more reliable models for grounded verb semantics, especially in a noisy environment. Then the question is how to manage such interaction: when to ask and what questions to ask to most efficiently acquire reliable models and apply them in execution. Next we describe the application of reinforcement learning to manage interactive question answering for both the execution phase and the learning phase. 4.3 Formulation of Interactive Learning Markov Decision Process (MDP) and its closely related Reinforcement Learning (RL) have been applied to sequential decision-making problems in dynamic domains with uncertainties, e.g., dialogue/interaction management (Singh et al., 2002; Paek and Pieraccini, 2008; Williams and Zweig, 2016), mapping language commands to actions (Branavan et al., 2009), interactive robot learning (Knox and Stone, 2011), and interactive information retrieval (Li et al., 2017). In this work, we formulate the choice of when to ask what questions during interaction as a sequential decisionmaking problem and apply reinforcement learning to acquire an optimal policy to manage interaction. Specifically, each of the execution and learning phases is governed by one policy (i.e., θE and θD), which is updated by the reinforcement learning algorithm. The use of RL intends to obtain optimal policies that can lead to the highest long-term reward by balancing the cost of interaction (e.g., the length of interaction and difficulties of questions) and the quality of the acquired models. 
The reinforcement formulation for both the execution phase and the learning phase are described below. State For the execution phase, each state se ∈ SE is a five tuple: se = <l, e, KB, Grd, Goal>. l is a language command, including a verb and multiple noun phrases extracted by the Stanford parser. For example, the command “Microwave the ramen” is represented as l = microwave(ramen). The environment e is a probabilistic representation of the currently perceived physical world, consisting of a set of grounded fluents and the confidence of perceiving each fluent (an example is shown in Figure 3). KB stands for the existing knowledge of verb models. Grd accounts for the agent’s current belief of object grounding: the probability of each noun in the l being grounded to different objects. Goal represents the agent’s belief of different goal state hypotheses of the current command. Within one interaction episode, command l and knowledge KB will stay the same, while e, Grd, and Goal may change accordingly due to interactive question answering and robot actions. In the execution phase, Grd and Goal are initialized with existing knowledge of learned verb models. For the learning phase, a state sd ∈SD is a four tuple: sd = <l, estart, eend, Grd>. estart and eend stands for the environment before the demonstration and after the demonstration. Action Motivated by previous studies on how humans ask questions while learning new skills (Cakmak and Thomaz, 2012), the agent’s question set includes two categories: yes/no questions and wh- questions. These questions are designed to address ambiguities in noun phrase grounding, uncertain environment sensing, and goal states. They are domain independent in nature. For example, one of the questions is np grd ynq(n, o). It is a yes/no question asking whether the noun phrase n refers to an object o (e.g., “I see a silver object, is that the pot?”). Other questions are env pred ynq(p) (i.e., whether a fluent p is present in the environment; e.g., “Is the microwave door open?”) and goal pred ynq(p) (i.e., whether a predicate p should be part of the goal; “Should the pot be on a pot stand?”). Table 2 lists all the actions available in the execution and learning phases. The select hypo action (i.e., select a goal hypothesis to execute) is only for the execution. Ideally, after asking questions, the agent should be more likely to select a goal hy1638 Action Name Explanation Question Example Reward 1. np grd whq(n) Ask for the grounding of a np. “Which is the cup, can you show me?” -6.51 2. np grd ynq(n, o) Confirm the grounding of a np. “I see a silver object, is that the pot?” -1.0 / -2.0 3. env pred ynq(p) Confirm a predicate in current environment. “Is the microwave door open?” -1.0 / -2.0 4. goal pred ynq(p) Confirm whether a predicate p should be in the final environment. “Is it true the pot should be on the counter?” -1.0 / -2.0 5. select hypo(h) Choose a hypothesis to use as goal and execute. 100 / -2.0 6. bulk np grd ynq(n, o) Confirm the grounding of multiple nps. “I think the pot is the red object and milk is in the white box, am I right?” -3.0 / -6.02 7. pred change ynq(p) Ask whether a predicate p has been changed by the action demonstration. “The pot is on a stand after the action, is that correct?” -1.0 / -2.0 8. include fluent(∧p) Include ∧p into the goal state representation. Update the verb semantic knowledge. 
100 / -2.0 Table 2: The action space for reinforcement learning, where n stands for a noun phrase, o a physical object, p a fluent representation of the current state of the world, h a goal hypothesis. Action 1 and 2 are shared by both the execution and learning phases. Action 3, 4, 5 are for the execution phase, and 6, 7, 8 are only used for the learning phase. -1.0/-2.0 are typically used for yes/no questions. When the human answers the question with a “yes”, the reward is -1.0, otherwise it’s -2.0. pothesis that best describes the current situation. For the learning phase, the include fluent(∧p) action forms a goal hypothesis by conjoining a set of fluents ps where each p should have high probability of being part of the goal. Transition The transition function takes action a in state s, and gives the next state s′ according to human feedback. Note that the command l does not change during interaction. But the agent’s belief of environment e, object grounding Grd, and goal hypotheses Goal is changed according to the questions and human answers. For example, suppose the agent asks whether noun phrase n refers to the object o, if the human confirms it, the probability of n being grounded to o becomes 1.0, otherwise it will become 0.0. Reward Finding a good reward function is a hard problem in reinforcement learning. Our current approach has followed the general practice in the spoken dialogue community (Schatzmann et al., 2006; Fang et al., 2014; Su et al., 2016). The immediate robot questions are assigned small costs to favor shorter and more efficient interaction. Furthermore, motivated by how humans ask 1According to the study in (Cakmak and Thomaz, 2012), the frequency of y/n questions used by humans is about 6.5 times the frequency of open questions (wh question), which motivates our assignment of -6.5 to wh questions. 2bulk np grd ynq asks multiple object grounding all at once. This is harder to answer than asking for a single np. Therefore, its cost is assigned three times of the other yes/no questions. Algorithm 1: Policy learning. The execution and learning phases share the same learning process, but with different state s, action a spaces, and feature vectors φ. The eend is only available to the learning phase. Input : e, l (, eend); Feature function φ; Old policy θ (i.e., a weight vector) Verb Goal States Hypotheses H; Initialize : state s initialized with e, l (, eend); first action a ∼P(a|s; θ) with ϵ greedy 1 while s is not terminal do 2 Take action a, receive reward r; 3 s′ = T(s, a); 4 Choose a′ ∼P(a′|s′; θ) with ϵ greedy; δ ←r + γ · θT · φ(s′, a′) −θT · φ(s, a); 5 θ ←θ + δ · η · φ(s, a); 6 end 7 if s terminates with positive feedback then 8 Update H; 9 end Output : Updated H and θ. questions (Cakmak and Thomaz, 2012), yes/no questions are easier for a human to answer than the open questions (e.g., wh-questions) and thus are given smaller costs. A large positive reward is given at the end of interaction when the task is completed successfully. Detailed reward assignment for different actions are shown in Table 2. Learning The SARSA algorithm with linear function approximation is utilized to update policies θE and θD (Sutton and Barto, 1998). Specifically, the objective of training is to learn an optimal value function Q(s, a) (i.e., the expected cu1639 Features shared by both phases If a is a np grd whq(n). The entropy of candidate groundings of n. If n has more than 4 grounding candidates. If a is a np grd ynq(n, o). The probability of n grounded to o. 
Additional Features specific for the Execution phase If a is a select hypo(h) action. The probability of hypo h not satisfied in current environment. Similarity between the ns used by command l and the commands from previous experiences. Additional Features specific for the Learning phase If a is a pred change ynq(p). The probability of p been changed by demo. Table 3: Example features used by the two phases. a stands for action. Other notations are the same as used in Table 2. The“If” features are binary, and the other features are real-valued. mulative reward of taking action a in a state s). This value function is approximated by a linear function Q(s, a) = θ⊺· φ(s, a), where φ(s, a) is a feature vector and θ is a weight updated during training. Details of the algorithm is shown in Algorithm 1. During testing, the agent can take an action a that maximizes the Q value at a state s. Feature Example features used by the two phases are listed in Table 3. These features intend to capture different dimensions of information such as specific types of questions, how well noun phrases are grounded to the environment, uncertainties of the environment, and consistencies between a hypothesis and the current environment. 5 Evaluation 5.1 Experiment Setup Dataset. To evaluate our approach, we utilized the benchmark made available by (Misra et al., 2015). Individual language commands and corresponding action sequences are extracted similarly as (She and Chai, 2016). This dataset includes common tasks in the kitchen and living room domains, where each data instance comes with a language command (e.g., “boil the water”, “throw the beer into the trashcan”) and the corresponding sequence of primitive actions. In total, there are 979 instances, including 75 different verbs and 215 different noun phrases. The length of primitive action sequences range from 1 to 51 with an average of 4.82 (+/-4.8). We divided the dataset into three groups: (1) 200 data instances were used by reinforcement learning to acquire optimal interaction policies; (2) 600 data instances were used by different approaches (i.e., previous approaches and our interactive learning approach) to acquire grounded verb semantics models; and (3) 179 data instances were used as testing data to evaluate the learned verb models. The performance on applying the learned models to execute actions for the testing data is reported. To learn interaction policies, a simulated human model is created from the dataset (Schatzmann et al., 2006) to continuously interact with the robot learner3. This simulated user can answer the robot’s different types of questions and make decisions on whether the robot’s execution is correct. During policy learning, one data instance can be used multiple times. At each time, the interaction sequence is different due to exploitation and exploration in RL in selecting the next action. The RL discount factor γ is set to 0.99, the ϵ in ϵgreedy is 0.1, and the learning rate is 0.01. Noisy Environment Representation. The original data provided by (Misra et al., 2015) is based on the assumption that environment sensing is perfect and deterministic. To enable incomplete and noisy environment representation, for each fluent (e.g., grasping(Cup3), near(robot1, Cup3)) in the original data, we independently sampled a confidence value to simulate the likelihood that a particular fluent can be detected correctly from the environment. 
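As an illustration of this perturbation step, here is a minimal sketch that attaches a sampled confidence to every fluent of a deterministic environment. The noise_fn argument is a placeholder for whichever sampling scheme is applied (the four schemes we compare are listed next), and the treatment of fluents that are false in the original data, as well as the use of clipping to [0, 1], are simplifying assumptions of this sketch.

```python
import random

def add_sensor_noise(env, noise_fn):
    """Turn a deterministic environment {fluent: bool} into a probabilistic one
    {fluent: confidence in [0, 1]} by sampling one confidence per fluent."""
    return {fluent: noise_fn(is_true) for fluent, is_true in env.items()}

def norm_noise(std):
    """Confidence ~ N(center, std^2) clipped to [0, 1]; centered at 1 for true
    fluents (and, as a simplification here, at 0 for false ones)."""
    def sample(is_true):
        center = 1.0 if is_true else 0.0
        return min(1.0, max(0.0, random.gauss(center, std)))
    return sample

# Example: a relatively reliable sensor (std = 0.3) observing two fluents.
env = {"Grasping(Kettle1)": True, "On(Kettle1, Stove)": False}
print(add_sensor_noise(env, norm_noise(0.3)))
```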
We applied the following four different variations in sampling the confidence values, which correspond to different levels of sensor reliability. (1) PerfectEnv represents the most reliable sensor. If a fluent is true in the original data, its sampled confidence is 1, and 0 otherwise. (2) NormStd3 represents a relatively reliable sensor. For each fluent in the original environment, a confidence is sampled according to a normal distribution N(1, 0.32) with an interval [0,1]. This distribution has a large probability of sampling a number larger than 0.5, meaning the corresponding fluent is still more likely to be true. (3) NormStd5 represents a less reliable sensor. The sampling distribution is N(1, 0.52), which has a larger probability of generating a number smaller than 0.5 compared to NormStd3. 3In our future work, interacting with real humans will be conducted through Amazon Mechanical Turk. And the policies acquired with a simulated user in this work will be used as initial policies. 1640 (4) UniEnv represents an unreliable sensor. Each number is sampled with a uniform distribution between 0 and 1. This means the sensor works randomly. A fluent has a equal change to be true or false no matter what the true environment is. Evaluation Metrics. We used the same evaluation metrics as in the previous works (Misra et al., 2015; She and Chai, 2016) to evaluate the performance of applying the learned models to testing instances on action planning. • IED: Instruction Editing Distance. This is a number between 0 and 1 measuring the similarity between the predicted action sequence and the ground-truth action sequence. IED equals 1 if the two sequences are exactly the same. • SJI: State Jaccard Index. This is a number between 0 and 1 measuring the similarity between the predicted and the ground-truth state changes. SJI equals 1 if action planning leads to exactly the same state change as in the ground-truth. Configurations. To understand the role of interactive learning in model acquisition and action planning, we first compared the interactive learning approach with the previous leading approach (presented as She16). To further evaluate the interaction policies acquired by reinforcement learning, we also compared the learned policy (i.e., RLPolicy) with the following two baseline policies: • RandomPolicy which randomly selects questions to ask during interaction. • ManualPolicy which continuously asks for yes/no confirmations (i.e., object grounding questions (GroundQ), environment questions (EnvQ), goal prediction questions (GoalQ)) until there’s no more questions before making a decision on model acquisition or action execution. 5.2 Results 5.2.1 The Effect of Interactive Learning Table 4 shows the performance comparison on the testing data between the previous approach She16 and our interactive learning approach based on environment representations with different levels of noise. The verb models acquired by interactive learning perform better consistently across all four She16 RL policy % improvement IED SJI IED SJI IED SJI PerfectEnv 0.430 0.426 0.453 0.468 5.3%∗ 9.9%∗ NormStd3 0.284 0.273 0.420 0.431 47.9%∗ 57.9%∗ NormStd5 0.172 0.168 0.392 0.411 127.9%∗144.6%∗ UniEnv 0.168 0.163 0.332 0.347 97.6%∗ 112.9%∗ Table 4: Performance comparison between She16 and our interactive learning based on environment representations with different levels of noise. All the improvements (marked *) are statistically significant (p < 0.01). 
environment conditions. When the environment becomes noisy (i.e., NormStd3, NormStd5, and UniEnv), the performance of She16, which relies only on demonstrations, decreases significantly. While interactive learning improves performance under the perfect environment condition, its effect in noisy environments is more remarkable: it leads to a significant performance gain of between 48% and 145%. These results validate our hypothesis that interactive question answering can help to alleviate the problem of uncertainties in environment representation and goal prediction.

Figure 5: Performance (SJI) comparison by applying models acquired based on different interaction policies to the testing data. (x-axis: number of data instances used to acquire verb models in the learning phase; y-axis: performance in action planning, SJI; curves: RL policy, Manual policy, Random policy, She16.)

Figure 5 shows the performance of the various learned models on the testing data, based on a varying number of training instances and different interaction policies. The interactive learning guided by the policy acquired from RL outperforms the previous approach She16. The RL policy slightly outperforms interactive learning using the manually defined policy (i.e., ManualPolicy). However, as shown in the next section, the ManualPolicy results in much longer interactions (i.e., more questions) than the RL-acquired policy.

                 Average number of questions                                          Performance
                 Learning Phase                   Execution Phase
                 GroundQ   EnvQ      TotalQ       GroundQ   EnvQ      GoalQ     TotalQ       IED       SJI
RLPolicy         2.130*    2.615*    4.746*       0.383*    0.650*    2.626     3.665*       0.420     0.430*
                 +/-0.231  +/-0.317  +/-0.307     +/-0.137  +/-0.366  +/-0.331  +/-0.469     +/-0.015  +/-0.018
ManualPolicy     2.495     5.338     7.833        1.236     3.202     2.353     6.792        0.406     0.404
                 +/-0.025  +/-0.008  +/-0.025     +/-0.002  +/-0.012  +/-0.023  +/-0.025     +/-0.002  +/-0.004
RandomPolicy     0.545     0.368     0.913        0.678     0.081     0.151     0.909        0.114     0.113
                 +/-0.016  +/-0.033  +/-0.040     +/-0.055  +/-0.030  +/-0.024  +/-0.018     +/-0.025  +/-0.029

Table 5: Comparison between different policies, including the average number (and standard deviation) of the different types of questions asked during the learning and execution phases, and the performance on action planning for the testing data. The results are based on the noisy environment sampled by NormStd3. * indicates a statistically significant difference (p < 0.05) between RLPolicy and ManualPolicy.

5.2.2 Comparison of Interaction Policies

Table 5 compares the performance of the different interaction policies and shows the average number of questions asked under each. It is not surprising that the RandomPolicy has the worst performance. The ManualPolicy performs similarly to the RLPolicy; however, its average interaction length is 6.792, much longer than that of the RLPolicy (3.127). These results further demonstrate that the policy learned from RL enables efficient interactions and the acquisition of more reliable verb models.

6 Conclusion

Robots live in a noisy environment. Due to the limitations of their external sensors, their representations of the shared environment can be error prone and full of uncertainties. As shown in previous work (Mourão et al., 2012), learning action models from noisy and incomplete observations of the world is extremely challenging. The same problem applies to the acquisition of verb semantics that are grounded to the perceived world.
To address this problem, this paper presents an interactive learning approach which aims to handle uncertainties of the environment as well as incompleteness and conflicts in state representation by asking human partners intelligent questions. The interaction strategies are learned through reinforcement learning. Our empirical results have shown a significant improvement in model acquisition and action prediction. When applying the learned models in new situations, the models acquired through interactive learning leads to over 140% performance gain in noisy environment. The current investigation also has several limitations. As in previous works, we assume the world can be described by a closed set of predicates. This causes significant simplification for the physical world. One of the important questions to address in the future is how to learn new predicates through interaction with humans. Another limitation is that the current utility function is learned based on a set of pre-identified features. Future work can explore deep neural network to alleviate feature engineering. As cognitive robots start to enter our daily lives, data-driven approaches to learning may not be possible in new situations. Human partners who work side-by-side with these cognitive robots are great resources that the robots can directly learn from. Recent years have seen an increasing amount of work on task learning from human partners (Saunders et al., 2006; Chernova and Veloso, 2008; Cantrell et al., 2012; Mohan et al., 2013; Asada et al., 2009; Mohseni-Kabir et al., 2015; Nejati et al., 2006; Liu et al., 2016). Our future work will incorporate interactive learning of verb semantics with task learning to enable autonomy that can learn by communicating with humans. Acknowledgments This work was supported by the National Science Foundation (IIS-1208390 and IIS-1617682) and the DARPA SIMPLEX program under a subcontract from UCLA (N66001-15-C-4035). The authors would like to thank Dipendra K. Misra and colleagues for providing the evaluation data, and the anonymous reviewers for valuable comments. 1642 References Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics Volume1(1):49– 62. Minoru Asada, Koh Hosoda, Yasuo Kuniyoshi, Hiroshi Ishiguro, Toshio Inui, Yuichiro Yoshikawa, Masaki Ogino, and Chisato Yoshida. 2009. Cognitive developmental robotics: A survey. IEEE Transactions on Autonomous Mental Development 1(1):12–34. S. R. K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’09, pages 82–90. John D. Bransford, Ann L. Brown, and Rodney R. Cocking. 2000. How People Learn: Brain, Mind, Experience, and School: Expanded Edition. National Academy Press., Washington, DC. Maya Cakmak and Andrea L. Thomaz. 2012. Designing robot learners that ask good questions. In Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA, HRI ’12, pages 17–24. R. Cantrell, K. Talamadupula, P. Schermerhorn, J. Benton, S. Kambhampati, and M. Scheutz. 2012. Tell me when and why to do it! 
run-time planner model updates via natural language instruction. In Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA, HRI ’12, pages 471– 478. Sonia Chernova and Manuela Veloso. 2008. Teaching multi-robot coordination using demonstration of communication and state sharing. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 3. International Foundation for Autonomous Agents and Multiagent Systems, pages 1183–1186. Rui Fang, Malcolm Doering, and Joyce Y. Chai. 2014. Collaborative models for referring expression generation in situated dialogue. In Proceedings of the 28th AAAI Conference on Artificial Intelligence. AAAI Press, AAAI’14, pages 1544–1550. Qiaozi Gao, Malcolm Doering, Shaohua Yang, and Joyce Y. Chai. 2016. Physical causality of action verbs in grounded language understanding. In ACL (1). The Association for Computer Linguistics. P. Gorniak and D. Roy. 2007. Situated language understanding as filtering perceived affordances. In Cognitive Science, volume 31(2), pages 197–231. Joohyun Kim and Raymond J. Mooney. 2012. Unsupervised pcfg induction for grounded language learning with highly ambiguous supervision. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Natural Language Learning (EMNLP-CoNLL ’12). Jeju Island, Korea, pages 433–444. W. Bradley Knox and Peter Stone. 2011. Understanding human teaching modalities in reinforcement learning environments: A preliminary report. In IJCAI 2011 Workshop on Agents Learning Interactively from Human Teachers (ALIHT). Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2017. Learning through dialogue interactions. In ICLR. Changsong Liu and Joyce Y. Chai. 2015. Learning to mediate perceptual differences in situated humanrobot dialogue. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI’15, pages 2288–2294. Changsong Liu, Lanbo She, Rui Fang, and Joyce Y. Chai. 2014. Probabilistic labeling for efficient referential grounding based on collaborative discourse. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 13–18. Changsong Liu, Shaohua Yang, Sari Saba-Sadiya, Nishant Shukla, Yunzhong He, Song-chun Zhu, and Joyce Y. Chai. 2016. Jointly learning grounded task structures from language instruction and visual demonstration. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1482–1492. Cynthia Matuszek, Nicholas Fitzgerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012a. A joint model of language and perception for grounded attribute learning. In John Langford and Joelle Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML-12). ACM, New York, NY, USA, pages 1671–1678. Cynthia Matuszek, Evan Herbst, Luke S. Zettlemoyer, and Dieter Fox. 2012b. Learning to parse natural language commands to a robot control system. In Jaydev P. Desai, Gregory Dudek, Oussama Khatib, and Vijay Kumar, editors, ISER. Springer, volume 88 of Springer Tracts in Advanced Robotics, pages 403–415. Dipendra K Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. 2014. Tell me dave: Contextsensitive grounding of natural language to manipulation instructions. 
Proceedings of Robotics: Science and Systems (RSS), Berkeley, USA . Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lexicon induction for high-level instructions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 1643 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 992–1002. Shiwali Mohan, James Kirk, and John Laird. 2013. A computational model for situated task learning with interactive instruction. In Proceedings of ICCM 2013 - 12th International Conference on Cognitive Modeling. Anahita Mohseni-Kabir, Charles Rich, Sonia Chernova, Candace L. Sidner, and Daniel Miller. 2015. Interactive hierarchical task learning from a single demonstration. In Proceedings of the Tenth Annual ACM/IEEE International Conference on HumanRobot Interaction. ACM, HRI ’15, pages 205–212. Kira Mour˜ao, Luke S. Zettlemoyer, Ronald P. A. Petrick, and Mark Steedman. 2012. Learning STRIPS operators from noisy and incomplete observations. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence. Catalina Island, CA, USA, pages 614–623. Negin Nejati, Pat Langley, and Tolga Konik. 2006. Learning hierarchical task networks by observation. In Proceedings of the 23rd international conference on Machine learning. ACM, pages 665–672. Tim Paek and Roberto Pieraccini. 2008. Automating spoken dialogue management design using machine learning: An industry perspective. Speech Communication 50(8-9):716–729. Joe Saunders, Chrystopher L Nehaniv, and Kerstin Dautenhahn. 2006. Teaching robots by moulding behavior and scaffolding the environment. In Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction. ACM, pages 118– 125. Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. Knowl. Eng. Rev. 21(2):97–126. Lanbo She and Joyce Y. Chai. 2016. Incremental acquisition of verb hypothesis space towards physical world interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Lanbo She, Shaohua Yang, Yu Cheng, Yunyi Jia, Joyce Y. Chai, and Ning Xi. 2014. Back to the blocks world: Learning new actions through situated human-robot dialogue. In Proceedings of the SIGDIAL 2014 Conference, The 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 18-20 June 2014, Philadelphia, PA, USA. pages 89–97. Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the njfun system. Journal of Artificial Intelligence Research 16:105–133. Pei-Hao Su, Milica Gasic, Nikola Mrkˇsi´c, Lina M. Rojas Barahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2431–2441. Richard S. Sutton and Andrew G. Barto. 1998. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition. Stefanie Tellex, Pratiksha Thaker, Joshua Joseph, and Nicholas Roy. 2014. 
Learning perceptually grounded word meanings from unaligned parallel data. Machine Learning 94(2):151–167. Jesse Thomason, Jivko Sinapov, Maxwell Svetlik, Peter Stone, and Raymond J. Mooney. 2016. Learning multi-modal grounded linguistic semantics by playing ”i spy”. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI16). New York City, pages 3477–3483. Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. 2015. Learning to interpret natural language commands through human-robot dialog. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI). pages 1923– 1929. Jason D Williams and Geoffrey Zweig. 2016. Endto-end lstm-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269 . Shaohua Yang, Qiaozi Gao, Changsong Liu, Caiming Xiong, Song-Chun Zhu, and Joyce Y. Chai. 2016. Grounded semantic role labeling. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016. pages 149–159. 1644
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1645–1656 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1151 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1645–1656 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1151 Multimodal Word Distributions Ben Athiwaratkun Cornell University [email protected] Andrew Gordon Wilson Cornell University [email protected] Abstract Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. We show that the resulting approach captures uniquely expressive semantic information, and outperforms alternatives, such as word2vec skip-grams, and Gaussian embeddings, on benchmark datasets such as word similarity and entailment. 1 Introduction To model language, we must represent words. We can imagine representing every word with a binary one-hot vector corresponding to a dictionary position. But such a representation contains no valuable semantic information: distances between word vectors represent only differences in alphabetic ordering. Modern approaches, by contrast, learn to map words with similar meanings to nearby points in a vector space (Mikolov et al., 2013a), from large datasets such as Wikipedia. These learned word embeddings have become ubiquitous in predictive tasks. Vilnis and McCallum (2014) recently proposed an alternative view, where words are represented by a whole probability distribution instead of a deterministic point vector. Specifically, they model each word by a Gaussian distribution, and learn its mean and covariance matrix from data. This approach generalizes any deterministic point embedding, which can be fully captured by the mean vector of the Gaussian distribution. Moreover, the full distribution provides much richer information than point estimates for characterizing words, representing probability mass and uncertainty across a set of semantics. However, since a Gaussian distribution can have only one mode, the learned uncertainty in this representation can be overly diffuse for words with multiple distinct meanings (polysemies), in order for the model to assign some density to any plausible semantics (Vilnis and McCallum, 2014). Moreover, the mean of the Gaussian can be pulled in many opposing directions, leading to a biased distribution that centers its mass mostly around one meaning while leaving the others not well represented. In this paper, we propose to represent each word with an expressive multimodal distribution, for multiple distinct meanings, entailment, heavy tailed uncertainty, and enhanced interpretability. For example, one mode of the word ‘bank’ could overlap with distributions for words such as ‘finance’ and ‘money’, and another mode could overlap with the distributions for ‘river’ and ‘creek’. It is our contention that such flexibility is critical for both qualitatively learning about the meanings of words, and for optimal performance on many predictive tasks. In particular, we model each word with a mixture of Gaussians (Section 3.1). 
We learn all the parameters of this mixture model using a maximum margin energy-based ranking objective (Joachims, 2002; Vilnis and McCallum, 2014) (Section 3.3), where the energy function describes the affinity between a pair of words. For analytic tractability with Gaussian mixtures, we use the inner product between probability distributions in a Hilbert space, known as the expected likelihood kernel (Jebara et al., 2004), as our energy function (Section 3.4). Additionally, we propose transformations for numerical stability and initialization A.2, resulting in a robust, straightforward, and 1645 scalable learning procedure, capable of training on a corpus with billions of words in days. We show that the model is able to automatically discover multiple meanings for words (Section 4.3), and significantly outperform other alternative methods across several tasks such as word similarity and entailment (Section 4.4, 4.5, 4.7). We have made code available at http://github.com/ benathi/word2gm, where we implement our model in Tensorflow (Abadi et. al, 2015). 2 Related Work In the past decade, there has been an explosion of interest in word vector representations. word2vec, arguably the most popular word embedding, uses continuous bag of words and skipgram models, in conjunction with negative sampling for efficient conditional probability estimation (Mikolov et al., 2013a,b). Other popular approaches use feedforward (Bengio et al., 2003) and recurrent neural network language models (Mikolov et al., 2010, 2011b; Collobert and Weston, 2008) to predict missing words in sentences, producing hidden layers that can act as word embeddings that encode semantic information. They employ conditional probability estimation techniques, including hierarchical softmax (Mikolov et al., 2011a; Mnih and Hinton, 2008; Morin and Bengio, 2005) and noise contrastive estimation (Gutmann and Hyv¨arinen, 2012). A different approach to learning word embeddings is through factorization of word cooccurrence matrices such as GloVe embeddings (Pennington et al., 2014). The matrix factorization approach has been shown to have an implicit connection with skip-gram and negative sampling Levy and Goldberg (2014). Bayesian matrix factorization where row and columns are modeled as Gaussians has been explored in Salakhutdinov and Mnih (2008) and provides a different probabilistic perspective of word embeddings. In exciting recent work, Vilnis and McCallum (2014) propose a Gaussian distribution to model each word. Their approach is significantly more expressive than typical point embeddings, with the ability to represent concepts such as entailment, by having the distribution for one word (e.g. ‘music’) encompass the distributions for sets of related words (‘jazz’ and ‘pop’). However, with a unimodal distribution, their approach cannot capture multiple distinct meanings, much like most deterministic approaches. Recent work has also proposed deterministic embeddings that can capture polysemies, for example through a cluster centroid of context vectors (Huang et al., 2012), or an adapted skip-gram model with an EM algorithm to learn multiple latent representations per word (Tian et al., 2014). Neelakantan et al. (2014) also extends skip-gram with multiple prototype embeddings where the number of senses per word is determined by a non-parametric approach. Liu et al. (2015) learns topical embeddings based on latent topic models where each word is associated with multiple topics. 
Another related work by Nalisnick and Ravi (2015) models embeddings in infinite-dimensional space where each embedding can gradually represent incremental word sense if complex meanings are observed. Probabilistic word embeddings have only recently begun to be explored, and have so far shown great promise. In this paper, we propose, to the best of our knowledge, the first probabilistic word embedding that can capture multiple meanings. We use a Gaussian mixture model which allows for a highly expressive distributions over words. At the same time, we retain scalability and analytic tractability with an expected likelihood kernel energy function for training. The model and training procedure harmonize to learn descriptive representations of words, with superior performance on several benchmarks. 3 Methodology In this section, we introduce our Gaussian mixture (GM) model for word representations, and present a training method to learn the parameters of the Gaussian mixture. This method uses an energy-based maximum margin objective, where we wish to maximize the similarity of distributions of nearby words in sentences. We propose an energy function that compliments the GM model by retaining analytic tractability. We also provide critical practical details for numerical stability and initialization. The code for model training and evaluation is available at http://github. com/benathi/word2gm. 3.1 Word Representation We represent each word w in a dictionary as a Gaussian mixture with K components. Specifically, the distribution of w, fw, is given by the 1646 density fw(⃗x) = K X i=1 pw,i N [⃗x; ⃗µw,i, Σw,i] (1) = K X i=1 pw,i p 2π|Σw,i| e−1 2 (⃗x−⃗µw,i)⊤Σ−1 w,i(⃗x−⃗µw,i) , where PK i=1 pw,i = 1. The mean vectors ⃗µw,i represent the location of the ith component of word w, and are akin to the point embeddings provided by popular approaches like word2vec. pw,i represents the component probability (mixture weight), and Σw,i is the component covariance matrix, containing uncertainty information. Our goal is to learn all of the model parameters ⃗µw,i, pw,i, Σw,i from a corpus of natural sentences to extract semantic information of words. Each Gaussian component’s mean vector of word w can represent one of the word’s distinct meanings. For instance, one component of a polysemous word such as ‘rock’ should represent the meaning related to ‘stone’ or ‘pebbles’, whereas another component should represent the meaning related to music such as ‘jazz’ or ‘pop’. Figure 1 illustrates our word embedding model, and the difference between multimodal and unimodal representations, for words with multiple meanings. 3.2 Skip-Gram The training objective for learning θ = {⃗µw,i, pw,i, Σw,i} draws inspiration from the continuous skip-gram model (Mikolov et al., 2013a), where word embeddings are trained to maximize the probability of observing a word given another nearby word. This procedure follows the distributional hypothesis that words occurring in natural contexts tend to be semantically related. For instance, the words ‘jazz’ and ‘music’ tend to occur near one another more often than ‘jazz’ and ‘cat’; hence, ‘jazz’ and ‘music’ are more likely to be related. The learned word representation contains useful semantic information and can be used to perform a variety of NLP tasks such as word similarity analysis, sentiment classification, modelling word analogies, or as a preprocessed input for complex system such as statistical machine translation. 
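The following minimal numpy sketch illustrates the density in Eq. (1) for a single word in the spherical case Σ_{w,i} = σ_{w,i}² I used in our experiments (Section 4.1). It is a standalone illustration, not the released TensorFlow implementation, and the parameters below are toy values rather than learned embeddings.

```python
import numpy as np

def word_density(x, means, log_sigmas, mix_logits):
    """Density f_w(x) of a word under a K-component spherical Gaussian mixture.

    means:      (K, D) component mean vectors mu_{w,i}
    log_sigmas: (K,)   log standard deviations (spherical Sigma_{w,i} = sigma_i^2 I)
    mix_logits: (K,)   unnormalized mixture weights; softmax gives p_{w,i}
    """
    K, D = means.shape
    p = np.exp(mix_logits - mix_logits.max())
    p /= p.sum()                                   # mixture weights sum to 1
    var = np.exp(2.0 * log_sigmas)                 # sigma_i^2
    sq_dist = ((x - means) ** 2).sum(axis=1)       # ||x - mu_{w,i}||^2
    comp = np.exp(-0.5 * sq_dist / var) / (2.0 * np.pi * var) ** (D / 2.0)
    return float((p * comp).sum())

# Two components in D = 4 dimensions (toy numbers, not learned parameters).
rng = np.random.default_rng(0)
means = rng.normal(size=(2, 4))
print(word_density(np.zeros(4), means, np.zeros(2), np.zeros(2)))
```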
music rock jazz basalt pop stone rock stone jazz pop music basalt music jazz rock basalt pop stone rock music rock jazz basalt stone pop rock basalt stone music jazz pop Figure 1: Top: A Gaussian Mixture embedding, where each component corresponds to a distinct meaning. Each Gaussian component is represented by an ellipsoid, whose center is specified by the mean vector and contour surface specified by the covariance matrix, reflecting subtleties in meaning and uncertainty. On the left, we show examples of Gaussian mixture distributions of words where Gaussian components are randomly initialized. After training, we see on the right that one component of the word ‘rock’ is closer to ‘stone’ and ‘basalt’, whereas the other component is closer to ‘jazz’ and ‘pop’. We also demonstrate the entailment concept where the distribution of the more general word ‘music’ encapsulates words such as ‘jazz’, ‘rock’, ‘pop’. Bottom: A Gaussian embedding model (Vilnis and McCallum, 2014). For words with multiple meanings, such as ‘rock’, the variance of the learned representation becomes unnecessarily large in order to assign some probability to both meanings. Moreover, the mean vector for such words can be pulled between two clusters, centering the mass of the distribution on a region which is far from certain meanings. 3.3 Energy-based Max-Margin Objective Each sample in the objective consists of two pairs of words, (w, c) and (w, c′). w is sampled from a sentence in a corpus and c is a nearby word within a context window of length ℓ. For instance, a word w = ‘jazz’ which occurs in the sentence ‘I listen to jazz music’ has context words (‘I’, ‘listen’, ‘to’ , ‘music’). c′ is a negative context word (e.g. ‘airplane’) obtained from random sampling. The objective is to maximize the energy between words that occur near each other, w and c, and minimize the energy between w and its negative context c′. This approach is similar to neg1647 ative sampling (Mikolov et al., 2013a,b), which contrasts the dot product between positive context pairs with negative context pairs. The energy function is a measure of similarity between distributions and will be discussed in Section 3.4. We use a max-margin ranking objective (Joachims, 2002), used for Gaussian embeddings in Vilnis and McCallum (2014), which pushes the similarity of a word and its positive context higher than that of its negative context by a margin m: Lθ(w, c, c′) = max(0, m −log Eθ(w, c) + log Eθ(w, c′)) This objective can be minimized by mini-batch stochastic gradient descent with respect to the parameters θ = {⃗µw,i, pw,i, Σw,i} – the mean vectors, covariance matrices, and mixture weights – of our multimodal embedding in Eq. (1). Word Sampling We use a word sampling scheme similar to the implementation in word2vec (Mikolov et al., 2013a,b) to balance the importance of frequent words and rare words. Frequent words such as ‘the’, ‘a’, ‘to’ are not as meaningful as relatively less frequent words such as ‘dog’, ‘love’, ‘rock’, and we are often more interested in learning the semantics of the less frequently observed words. We use subsampling to improve the performance of learning word vectors (Mikolov et al., 2013b). This technique discards word wi with probability P(wi) = 1 − p t/f(wi), where f(wi) is the frequency of word wi in the training corpus and t is a frequency threshold. 
To generate negative context words, each word type wi is sampled according to a distribution Pn(wi) ∝U(wi)3/4 which is a distorted version of the unigram distribution U(wi) that also serves to diminish the relative importance of frequent words. Both subsampling and the negative distribution choice are proven effective in word2vec training (Mikolov et al., 2013b). 3.4 Energy Function For vector representations of words, a usual choice for similarity measure (energy function) is a dot product between two vectors. Our word representations are distributions instead of point vectors and therefore need a measure that reflects not only the point similarity, but also the uncertainty. We propose to use the expected likelihood kernel, which is a generalization of an inner product between vectors to an inner product between distributions (Jebara et al., 2004). That is, E(f, g) = Z f(x)g(x) dx = ⟨f, g⟩L2 where ⟨·, ·⟩L2 denotes the inner product in Hilbert space L2. We choose this form of energy since it can be evaluated in a closed form given our choice of probabilistic embedding in Eq. (1). For Gaussian mixtures f, g representing the words wf, wg, f(x) = PK i=1 piN(x; ⃗µf,i, Σf,i) and g(x) = PK i=1 qiN(x; ⃗µg,i, Σg,i), PK i=1 pi = 1, and PK i=1 qi = 1, we find (see Section A.1) the log energy is log Eθ(f, g) = log K X j=1 K X i=1 piqjeξi,j (2) where ξi,j ≡log N(0; ⃗µf,i −⃗µg,j, Σf,i + Σg,j) = −1 2 log det(Σf,i + Σg,j) −D 2 log(2π) −1 2(⃗µf,i −⃗µg,j)⊤(Σf,i + Σg,j)−1(⃗µf,i −⃗µg,j) (3) We call the term ξi,j partial (log) energy. Observe that this term captures the similarity between the ith meaning of word wf and the jth meaning of word wg. The total energy in Equation 2 is the sum of possible pairs of partial energies, weighted accordingly by the mixture probabilities pi and qj. The term −(⃗µf,i−⃗µg,j)⊤(Σf,i+Σg,j)−1(⃗µf,i− ⃗µg,j) in ξi,j explains the difference in mean vectors of semantic pair (wf, i) and (wg, j). If the semantic uncertainty (covariance) for both pairs are low, this term has more importance relative to other terms due to the inverse covariance scaling. We observe that the loss function Lθ in Section 3.3 attains a low value when Eθ(w, c) is relatively high. High values of Eθ(w, c) can be achieved when the component means across different words ⃗µf,i and ⃗µg,j are close together (e.g., similar point representations). High energy can also be achieved by large values of Σf,i and Σg,j, which washes out the importance of the mean vector difference. The term −log det(Σf,i +Σg,j) serves as a regularizer that prevents the covariances from being pushed too high at the expense of learning a good mean embedding. 1648 At the beginning of training, ξi,j roughly are on the same scale among all pairs (i, j)’s. During this time, all components learn the signals from the word occurrences equally. As training progresses and the semantic representation of each mixture becomes more clear, there can be one term of ξi,j’s that is predominantly higher than other terms, giving rise to a semantic pair that is most related. The negative KL divergence is another sensible choice of energy function, providing an asymmetric metric between word distributions. However, unlike the expected likelihood kernel, KL divergence does not have a closed form if the two distributions are Gaussian mixtures. 4 Experiments We have introduced a model for multi-prototype embeddings, which expressively captures word meanings with whole probability distributions. 
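For the spherical covariances used in our experiments, Σ_{f,i} + Σ_{g,j} = (σ_{f,i}² + σ_{g,j}²)I, and Eqs. (2)–(3) reduce to a few array operations. The following is a minimal numpy sketch (again a standalone illustration, not the released code); the weighted log-sum-exp over the K × K partial energies ξ_{i,j} implements Eq. (2) in a numerically stable way.

```python
import numpy as np
from scipy.special import logsumexp

def log_energy(mu_f, var_f, p_f, mu_g, var_g, p_g):
    """log E(f, g) of Eq. (2) for two spherical Gaussian mixtures.

    mu_*:  (K, D) component means
    var_*: (K,)   spherical variances sigma^2 (Sigma = sigma^2 I)
    p_*:   (K,)   mixture weights summing to 1
    """
    D = mu_f.shape[1]
    # Partial log energies xi_{i,j} = log N(0; mu_f,i - mu_g,j, Sigma_f,i + Sigma_g,j)
    diff = mu_f[:, None, :] - mu_g[None, :, :]          # (K, K, D)
    var_sum = var_f[:, None] + var_g[None, :]            # (K, K)
    sq_dist = (diff ** 2).sum(axis=-1)                   # (K, K)
    xi = (-0.5 * D * np.log(var_sum)                     # -1/2 log det(Sigma_f,i + Sigma_g,j)
          - 0.5 * D * np.log(2.0 * np.pi)
          - 0.5 * sq_dist / var_sum)
    # log sum_{i,j} p_i q_j exp(xi_{i,j}), computed stably
    log_w = np.log(p_f)[:, None] + np.log(p_g)[None, :]
    return float(logsumexp(xi + log_w))

# Toy usage with K = 2 components in D = 50 dimensions.
rng = np.random.default_rng(0)
mu = rng.normal(size=(2, 50))
w = np.full(2, 0.5)
print(log_energy(mu, np.ones(2), w, mu + 0.1, np.ones(2), w))
```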
We show that our combination of energy and objective functions, proposed in Section 3, enables one to learn interpretable multimodal distributions through unsupervised training, for describing words with multiple distinct meanings. By representing multiple distinct meanings, our model also reduces the unnecessarily large variance of a Gaussian embedding model, and has improved results on word entailment tasks. To learn the parameters of the proposed mixture model, we train on a concatenation of two datasets: UKWAC (2.5 billion tokens) and Wackypedia (1 billion tokens) (Baroni et al., 2009). We discard words that occur fewer than 100 times in the corpus, which results in a vocabulary size of 314, 129 words. Our word sampling scheme, described at the end of Section 4.3, is similar to that of word2vec with one negative context word for each positive context word. After training, we obtain learned parameters {⃗µw,i, Σw,i, pi}K i=1 for each word w. We treat the mean vector ⃗µw,i as the embedding of the ith mixture component with the covariance matrix Σw,i representing its subtlety and uncertainty. We perform qualitative evaluation to show that our embeddings learn meaningful multi-prototype representations and compare to existing models using a quantitative evaluation on word similarity datasets and word entailment. We name our model as Word to Gaussian Mixture (w2gm) in constrast to Word to Gaussian (w2g) (Vilnis and McCallum, 2014). Unless stated otherwise, w2g refers to our implementation of w2gm model with one mixture component. 4.1 Hyperparameters Unless stated otherwise, we experiment with K = 2 components for the w2gm model, but we have results and discussion of K = 3 at the end of section 4.3. We primarily consider the spherical case for computational efficiency. We note that for diagonal or spherical covariances, the energy can be computed very efficiently since the matrix inversion would simply require O(d) computation instead of O(d3) for a full matrix. Empirically, we have found diagonal covariance matrices become roughly spherical after training. Indeed, for these relatively high dimensional embeddings, there are sufficient degrees of freedom for the mean vectors to be learned such that the covariance matrices need not be asymmetric. Therefore, we perform all evaluations with spherical covariance models. Models used for evaluation have dimension D = 50 and use context window ℓ= 10 unless stated otherwise. We provide additional hyperparameters and training details in the supplementary material (A.2). 4.2 Similarity Measures Since our word embeddings contain multiple vectors and uncertainty parameters per word, we use the following measures that generalizes similarity scores. These measures pick out the component pair with maximum similarity and therefore determine the meanings that are most relevant. 4.2.1 Expected Likelihood Kernel A natural choice for a similarity score is the expected likelihood kernel, an inner product between distributions, which we discussed in Section 3.4. This metric incorporates the uncertainty from the covariance matrices in addition to the similarity between the mean vectors. 4.2.2 Maximum Cosine Similarity This metric measures the maximum similarity of mean vectors among all pairs of mixture components between distributions f and g. That is, d(f, g) = max i,j=1,...,K ⟨µf,i, µg,j⟩ ||µf,i|| · ||µg,j||, which corresponds to matching the meanings of f and g that are the most similar. 
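A minimal sketch of this score, operating directly on the K × D matrices of component mean vectors (the variable names are hypothetical):

```python
import numpy as np

def max_cosine_similarity(mu_f, mu_g):
    """d(f, g) = max over component pairs (i, j) of cos(mu_f[i], mu_g[j])."""
    a = mu_f / np.linalg.norm(mu_f, axis=1, keepdims=True)
    b = mu_g / np.linalg.norm(mu_g, axis=1, keepdims=True)
    return float((a @ b.T).max())
```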
For a Gaussian embedding, maximum similarity reduces to the usual cosine similarity. 1649 Word Co. Nearest Neighbors rock 0 basalt:1, boulder:1, boulders:0, stalagmites:0, stalactites:0, rocks:1, sand:0, quartzite:1, bedrock:0 rock 1 rock/:1, ska:0, funk:1, pop-rock:1, punk:1, indie-rock:0, band:0, indie:0, pop:1 bank 0 banks:1, mouth:1, river:1, River:0, confluence:0, waterway:1, downstream:1, upstream:0, dammed:0 bank 1 banks:0, banking:1, banker:0, Banks:1, bankas:1, Citibank:1, Interbank:1, Bankers:0, transactions:1 Apple 0 Strawberry:0, Tomato:1, Raspberry:1, Blackberry:1, Apples:0, Pineapple:1, Grape:1, Lemon:0 Apple 1 Macintosh:1, Mac:1, OS:1, Amiga:0, Compaq:0, Atari:1, PC:1, Windows:0, iMac:0 star 0 stars:0, Quaid:0, starlet:0, Dafoe:0, Stallone:0, Geena:0, Niro:0, Zeta-Jones:1, superstar:0 star 1 stars:1, brightest:0, Milky:0, constellation:1, stellar:0, nebula:1, galactic:1, supernova:1, Ophiuchus:1 cell 0 cellular:0, Nextel:0, 2-line:0, Sprint:0, phones.:1, pda:1, handset:0, handsets:1, pushbuttons:0 cell 1 cytoplasm:0, vesicle:0, cytoplasmic:1, macrophages:0, secreted:1, membrane:0, mitotic:0, endocytosis:1 left 0 After:1, back:0, finally:1, eventually:0, broke:0, joined:1, returned:1, after:1, soon:0 left 1 right-hand:0, hand:0, right:0, left-hand:0, lefthand:0, arrow:0, turn:0, righthand:0, Left:0 Word Nearest Neighbors rock band, bands, Rock, indie, Stones, breakbeat, punk, electronica, funk bank banks, banking, trader, trading, Bank, capital, Banco, bankers, cash Apple Macintosh, Microsoft, Windows, Macs, Lite, Intel, Desktop, WordPerfect, Mac star stars, stellar, brightest, Stars, Galaxy, Stardust, eclipsing, stars., Star cell cells, DNA, cellular, cytoplasm, membrane, peptide, macrophages, suppressor, vesicles left leaving, turned, back, then, After, after, immediately, broke, end Table 1: Nearest neighbors based on cosine similarity between the mean vectors of Gaussian components for Gaussian mixture embedding (top) (for K = 2) and Gaussian embedding (bottom). The notation w:i denotes the ith mixture component of the word w. 4.2.3 Minimum Euclidean Distance Cosine similarity is popular for evaluating embeddings. However, our training objective directly involves the Euclidean distance in Eq. (3), as opposed to dot product of vectors such as in word2vec. Therefore, we also consider the Euclidean metric: d(f, g) = min i,j=1,...,K[||µf,i−µg,j||]. 4.3 Qualitative Evaluation In Table 1, we show examples of polysemous words and their nearest neighbors in the embedding space to demonstrate that our trained embeddings capture multiple word senses. For instance, a word such as ‘rock’ that could mean either ‘stone’ or ‘rock music’ should have each of its meanings represented by a distinct Gaussian component. Our results for a mixture of two Gaussians model confirm this hypothesis, where we observe that the 0th component of ‘rock’ being related to (‘basalt’, ‘boulders’) and the 1st component being related to (‘indie’, ‘funk’, ‘hip-hop’). Similarly, the word bank has its 0th component representing the river bank and the 1st component representing the financial bank. By contrast, in Table 1 (bottom), see that for Gaussian embeddings with one mixture component, nearest neighbors of polysemous words are predominantly related to a single meaning. For instance, ‘rock’ mostly has neighbors related to rock music and ‘bank’ mostly related to the financial bank. The alternative meanings of these polysemous words are not well represented in the embeddings. 
As a numerical example, the cosine similarity between ‘rock’ and ‘stone’ for the Gaussian representation of Vilnis and McCallum (2014) is only 0.029, much lower than the cosine similarity 0.586 between the 0th component of ‘rock’ and ‘stone’ in our multimodal representation. In cases where a word only has a single popular meaning, the mixture components can be fairly close; for instance, one component of ‘stone’ is close to (‘stones’, ‘stonework’, ‘slab’) and the other to (‘carving, ‘relic’, ‘excavated’), which reflects subtle variations in meanings. In general, the mixture can give properties such as heavy tails and more interesting unimodal characterizations of uncertainty than could be described by a single Gaussian. Embedding Visualization We provide an interactive visualization as part of our code repository: https://github.com/benathi/ word2gm#visualization that allows realtime queries of words’ nearest neighbors (in the embeddings tab) for K = 1, 2, 3 components. We use a notation similar to that of Table 1, where a token w:i represents the component i of a word w. For instance, if in the K = 2 link we search for bank:0, we obtain the nearest neigh1650 bors such as river:1, confluence:0, waterway:1, which indicates that the 0th component of ‘bank’ has the meaning ‘river bank’. On the other hand, searching for bank:1 yields nearby words such as banking:1, banker:0, ATM:0, indicating that this component is close to the ‘financial bank’. We also have a visualization of a unimodal (w2g) for comparison in the K = 1 link. In addition, the embedding link for our Gaussian mixture model with K = 3 mixture components can learn three distinct meanings. For instance, each of the three components of ‘cell’ is close to (‘keypad’, ‘digits’), (‘incarcerated’, ‘inmate’) or (‘tissue’, ‘antibody’), indicating that the distribution captures the concept of ‘cellphone’, ‘jail cell’, or ‘biological cell’, respectively. Due to the limited number of words with more than 2 meanings, our model with K = 3 does not generally offer substantial performance differences to our model with K = 2; hence, we do not further display K = 3 results for compactness. 4.4 Word Similarity We evaluate our embeddings on several standard word similarity datasets, namely, SimLex (Hill et al., 2014), WS or WordSim-353, WS-S (similarity), WS-R (relatedness) (Finkelstein et al., 2002), MEN (Bruni et al., 2014), MC (Miller and Charles, 1991), RG (Rubenstein and Goodenough, 1965), YP (Yang and Powers, 2006), MTurk(287,-771) (Radinsky et al., 2011; Halawi et al., 2012), and RW (Luong et al., 2013). Each dataset contains a list of word pairs with a human score of how related or similar the two words are. We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores generated by the embeddings. The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels. The correlation results are shown in Table 2 using the scores generated from the expected likelihood kernel, maximum cosine similarity, and maximum Euclidean distance. We show the results of our Gaussian mixture model and compare the performance with that of word2vec and the original Gaussian embedding by Vilnis and McCallum (2014). We note that our model of a unimodal Gaussian embedding w2g also outperforms the original model, which differs in model hyperparameters and initialization, for most datasets. 
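All of the similarity results reported here follow the same protocol: score every word pair with the embedding model and compute the Spearman correlation against the human ratings. A minimal sketch, assuming a similarity(w1, w2) callable such as the maximum cosine similarity defined above:

```python
from scipy.stats import spearmanr

def evaluate_similarity(pairs, human_scores, similarity):
    """Spearman correlation between model scores and human ratings.

    pairs:        list of (word1, word2)
    human_scores: list of gold relatedness/similarity ratings
    similarity:   callable scoring a word pair, e.g. max cosine similarity
    """
    model_scores = [similarity(w1, w2) for w1, w2 in pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return 100.0 * rho  # reported as rho x 100, as in the tables
```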
Our multi-prototype model w2gm also performs better than skip-gram or Gaussian embedding methods on many datasets, namely, WS, WS-R, MEN, MC, RG, YP, MT-287, RW. The maximum cosine similarity yields the best performance on most datasets; however, the minimum Euclidean distance is a better metric for the datasets MC and RW. These results are consistent for both the single-prototype and the multi-prototype models. We also compare out results on WordSim-353 with the multi-prototype embedding method by Huang et al. (2012) and Neelakantan et al. (2014), shown in Table 3. We observe that our singleprototype model w2g is competitive compared to models by Huang et al. (2012), even without using a corpus with stop words removed. This could be due to the auto-calibration of importance via the covariance learning which decrease the importance of very frequent words such as ‘the’, ‘to’, ‘a’, etc. Moreover, our multi-prototype model substantially outperforms the model of Huang et al. (2012) and the MSSG model of Neelakantan et al. (2014) on the WordSim-353 dataset. 4.5 Word Similarity for Polysemous Words We use the dataset SCWS introduced by Huang et al. (2012), where word pairs are chosen to have variations in meanings of polysemous and homonymous words. We compare our method with multiprototype models by Huang (Huang et al., 2012), Tian (Tian et al., 2014), Chen (Chen et al., 2014), and MSSG model by (Neelakantan et al., 2014). We note that Chen model uses an external lexical source WordNet that gives it an extra advantage. We use many metrics to calculate the scores for the Spearman correlation. MaxSim refers to the maximum cosine similarity. AveSim is the average of cosine similarities with respect to the component probabilities. In Table 4, the model w2g performs the best among all single-prototype models for either 50 or 200 vector dimensions. Our model w2gm performs competitively compared to other multiprototype models. In SCWS, the gain in flexibility in moving to a probability density approach appears to dominate over the effects of using a multiprototype. In most other examples, we see w2gm surpass w2g, where the multi-prototype structure is just as important for good performance as the 1651 Dataset sg* w2g* w2g/mc w2g/el w2g/me w2gm/mc w2gm/el w2gm/me SL 29.39 32.23 29.35 25.44 25.43 29.31 26.02 27.59 WS 59.89 65.49 71.53 61.51 64.04 73.47 62.85 66.39 WS-S 69.86 76.15 76.70 70.57 72.3 76.73 70.08 73.3 WS-R 53.03 58.96 68.34 54.4 55.43 71.75 57.98 60.13 MEN 70.27 71.31 72.58 67.81 65.53 73.55 68.5 67.7 MC 63.96 70.41 76.48 72.70 80.66 79.08 76.75 80.33 RG 70.01 71 73.30 72.29 72.12 74.51 71.55 73.52 YP 39.34 41.5 41.96 38.38 36.41 45.07 39.18 38.58 MT-287 64.79 57.5 58.31 66.60 57.24 60.61 MT-771 60.86 55.89 54.12 60.82 57.26 56.43 RW 28.78 32.34 33.16 28.62 31.64 35.27 Table 2: Spearman correlation for word similarity datasets. The models sg, w2g, w2gm denote word2vec skip-gram, Gaussian embedding, and Gaussian mixture embedding (K=2). The measures mc, el, me denote maximum cosine similarity, expected likelihood kernel, and minimum Euclidean distance. For each of w2g and w2gm, we underline the similarity metric with the best score. For each dataset, we boldface the score with the best performance across all models. The correlation scores for sg*, w2g* are taken from Vilnis and McCallum (2014) and correspond to cosine distance. 
MODEL ρ × 100 HUANG 64.2 HUANG* 71.3 MSSG 50D 63.2 MSSG 300D 71.2 W2G 70.9 W2GM 73.5 Table 3: Spearman’s correlation (ρ) on WordSim353 datasets for our Word to Gaussian Mixture embeddings, as well as the multi-prototype embedding by Huang et al. (2012) and the MSSG model by Neelakantan et al. (2014). Huang* is trained using data with all stop words removed. All models have dimension D = 50 except for MSSG 300D with D = 300 which is still outperformed by our w2gm model. probabilistic representation. 4.6 Reduction in Variance of Polysemous Words One motivation for our Gaussian mixture embedding is to model word uncertainty more accurately than Gaussian embeddings, which can have overly large variances for polysemous words (in order to assign some mass to all of the distinct meanings). We see that our Gaussian mixture model does indeed reduce the variances of each component for such words. For instance, we observe that the word rock in w2g has much higher variance per dimension (e−1.8 ≈1.65) compared to that of Gaussian components of rock in w2gm (which has variance of roughly e−2.5 ≈0.82). We also MODEL DIMENSION ρ × 100 WORD2VEC SKIP-GRAM 50 61.7 HUANG-S 50 58.6 W2G 50 64.7 CHEN-S 200 64.2 W2G 200 66.2 HUANG-M AVGSIM 50 62.8 TIAN-M MAXSIM 50 63.6 W2GM MAXSIM 50 62.7 MSSG AVGSIM 50 64.2 CHEN-M AVGSIM 200 66.2 W2GM MAXSIM 200 65.5 Table 4: Spearman’s correlation ρ on dataset SCWS. We show the results for single prototype (top) and multi-prototype (bottom) The suffix -(S,M) refers to single and multiple prototype models, respectively. see, in the next section, that the Gaussian mixture model has desirable quantitative behavior for word entailment. 4.7 Word Entailment We evaluate our embeddings on the word entailment dataset from Baroni et al. (2012). The lexical entailment between words is denoted by w1 |= w2 which means that all instances of w1 are w2. The entailment dataset contains positive pairs such as aircraft |= vehicle and negative pairs such as aircraft ̸|= insect. We generate entailment scores of word pairs and find the best threshold, measured by Average Precision (AP) or F1 score, which identifies negative versus positive entailment. We use the max1652 MODEL SCORE BEST AP BEST F1 W2G (5) COS 73.1 76.4 W2G (5) KL 73.7 76.0 W2GM (5) COS 73.6 76.3 W2GM (5) KL 75.7 77.9 W2G (10) COS 73.0 76.1 W2G (10) KL 74.2 76.1 W2GM (10) COS 72.9 75.6 W2GM (10) KL 74.7 76.3 Table 5: Entailment results for models w2g and w2gm with window size 5 and 10. The metrics used are the maximum cosine similarity, or the maximum negative KL divergence. We calculate the best average precision as well as the best F1 score. In most cases, w2gm outperforms w2g for describing entailment. imum cosine similarity and the minimum KL divergence, d(f, g) = min i,j=1,...,K KL(f||g), for entailment scores. The minimum KL divergence is similar to the maximum cosine similarity, but also incorporates the embedding uncertainty. In addition, KL divergence is an asymmetric measure, which is more suitable for certain tasks such as word entailment where a relationship is unidirectional. For instance, w1 |= w2 does not imply w2 |= w1. Indeed, aircraft |= vehicle does not imply vehicle |= aircraft, since all aircraft are vehicles but not all vehicles are aircraft. The difference between KL(w1||w2) versus KL(w2||w1) distinguishes which word distribution encompasses another distribution, as demonstrated in Figure 1. Table 5 shows the results of our w2gm model versus the Gaussian embedding model w2g. 
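To illustrate the entailment scoring described above, here is a minimal Python sketch of the minimum KL divergence between component pairs under diagonal covariances, using the standard closed form for the KL divergence between diagonal Gaussians; the resulting score would then be thresholded for the best AP or F1. The function names are illustrative, not from the released code.

import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    # KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ); all arrays of shape (D,).
    d = mu0.shape[0]
    return 0.5 * (np.sum(var0 / var1)
                  + np.sum((mu1 - mu0) ** 2 / var1)
                  - d
                  + np.sum(np.log(var1) - np.log(var0)))

def entailment_score_min_kl(mu_f, var_f, mu_g, var_g):
    # Negative of the minimum KL over all pairs of mixture components
    # (i.e., the maximum negative KL); mu_*, var_* have shape (K, D).
    kls = [kl_diag_gaussians(mu_f[i], var_f[i], mu_g[j], var_g[j])
           for i in range(mu_f.shape[0]) for j in range(mu_g.shape[0])]
    return -min(kls)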
We observe a trend for both models with window size 5 and 10 that the KL metric yields improvement (both AP and F1) over cosine similarity. In addition, w2gm has a better performance compared to w2g. The multi-prototype model estimates the meaning uncertainty better since it is no longer constrained to be unimodal, leading to better characterizations of entailment. On the other hand, the Gaussian embedding model suffers from large variance problem for polysemous words, which results in less informative word distribution and inferior entailment scores. 5 Discussion We introduced a model that represents words with expressive multimodal distributions formed from Gaussian mixtures. To learn the properties of each mixture, we proposed an analytic energy function for combination with a maximum margin objective. The resulting embeddings capture different semantics of polysemous words, uncertainty, and entailment, and also perform favorably on word similarity benchmarks. Elsewhere, latent probabilistic representations are proving to be exceptionally valuable, able to capture nuances such as face angles with variational autoencoders (Kingma and Welling, 2013) or subtleties in painting strokes with the InfoGAN (Chen et al., 2016). Moreover, classically deterministic deep learning architectures are now being generalized to probabilistic deep models, for full predictive distributions instead of point estimates, and significantly more expressive representations (Wilson et al., 2016b,a; Al-Shedivat et al., 2016; Gan et al., 2016; Fortunato et al., 2017). Similarly, probabilistic word embeddings can capture a range of subtle meanings, and advance the state of the art in predictive tasks. Multimodal word distributions naturally represent our belief that words do not have single precise meanings: indeed, the shape of a word distribution can express much more semantic information than any point representation. In the future, multimodal word distributions could open the doors to a new suite of applications in language modelling, where whole word distributions are used as inputs to new probabilistic LSTMs, or in decision functions where uncertainty matters. As part of this effort, we can explore different metrics between distributions, such as KL divergences, which would be a natural choice for order embeddings that model entailment properties. It would also be informative to explore inference over the number of components in mixture models for word distributions. Such an approach could potentially discover an unbounded number of distinct meanings for words, but also distribute the support of each word distribution to express highly nuanced meanings. Alternatively, we could imagine a dependent mixture model where the distributions over words are evolving with time and other covariates. One could also build new types of supervised language models, constructed to more fully leverage the rich information provided by word distributions. Acknowledgements We thank NSF IIS-1563887 for support. 1653 References Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu, and Eric P Xing. 2016. Learning scalable deep kernels with recurrent structure. arXiv preprint arXiv:1610.08936 . Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In EACL 2012, 13th Conference of the European Chapter of the Association for Computational Linguistics, Avignon, France, April 23-27, 2012. pages 23–32. 
http://aclweb.org/anthologynew/E/E12/E12-1004.pdf. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation 43(3):209– 226. https://doi.org/10.1007/s10579-009-9081-4. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research 3:1137– 1155. Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Int. Res. 49(1):1–47. Xi Chen, Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. pages 2172–2180. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1025– 1035. http://aclweb.org/anthology/D/D14/D14-1110.pdf. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008. pages 160–167. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121–2159. Mart´ın Abadi et al. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: the concept revisited. ACM Trans. Inf. Syst. 20(1):116–131. Meire Fortunato, Charles Blundell, and Oriol Vinyals. 2017. Bayesian recurrent neural networks. arXiv preprint arXiv:1704.02798 . Zhe Gan, Chunyuan Li, Changyou Chen, Yunchen Pu, Qinliang Su, and Lawrence Carin. 2016. Scalable bayesian learning of recurrent neural networks for language modeling. arXiv preprint arXiv:1611.08034 . Michael Gutmann and Aapo Hyv¨arinen. 2012. Noisecontrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research 13:307–361. Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In The 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’12, Beijing, China, August 12-16, 2012. pages 1406–1414. Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Simlex999: Evaluating semantic models with (genuine) similarity estimation. CoRR abs/1408.3456. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 1: Long Papers. pages 873– 882. http://www.aclweb.org/anthology/P12-1092. Tony Jebara, Risi Kondor, and Andrew Howard. 
2004. Probability product kernels. Journal of Machine Learning Research 5:819–844. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, July 23-26, 2002, Edmonton, Alberta, Canada. pages 133–142. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. Diederik P. Kingma and Max Welling. 2013. Autoencoding variational bayes. CoRR abs/1312.6114. http://arxiv.org/abs/1312.6114. Y. LeCun, L. Bottou, G. Orr, and K. Muller. 1998. Efficient backprop. In G. Orr and Muller K., editors, Neural Networks: Tricks of the trade. Springer. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada. pages 2177–2185. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA.. pages 2418–2424. http://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9314. Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. Sofia, Bulgaria. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. 1654 Tomas Mikolov, Anoop Deoras, Daniel Povey, Luk´as Burget, and Jan Cernock´y. 2011a. Strategies for training large scale neural network language models. In 2011 IEEE Workshop on Automatic Speech Recognition & Understanding, ASRU 2011, Waikoloa, HI, USA, December 11-15, 2011. pages 196–201. https://doi.org/10.1109/ASRU.2011.6163930. Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010. pages 1045–1048. Tomas Mikolov, Stefan Kombrink, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2011b. Extensions of recurrent neural network language model. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011, May 22-27, 2011, Prague Congress Center, Prague, Czech Republic. pages 5528–5531. https://doi.org/10.1109/ICASSP.2011.5947611. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States.. pages 3111–3119. George A. Miller and Walter G. Charles. 1991. Contextual Correlates of Semantic Similarity. Language & Cognitive Processes 6(1):1–28. https://doi.org/10.1080/01690969108406936. Andriy Mnih and Geoffrey E. Hinton. 2008. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008. pages 1081–1088. Frederic Morin and Yoshua Bengio. 2005. 
Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, AISTATS 2005, Bridgetown, Barbados, January 6-8, 2005. Eric T. Nalisnick and Sachin Ravi. 2015. Infinite dimensional word embeddings. CoRR abs/1511.05392. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1059–1069. http://aclweb.org/anthology/D/D14/D14-1113.pdf. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1532– 1543. http://aclweb.org/anthology/D/D14/D14-1162.pdf. Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: Computing word relatedness using temporal semantic analysis. In Proceedings of the 20th International Conference on World Wide Web. WWW ’11, pages 337–346. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Commun. ACM 8(10):627– 633. Ruslan Salakhutdinov and Andriy Mnih. 2008. Bayesian probabilistic matrix factorization using markov chain monte carlo. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008. pages 880–887. https://doi.org/10.1145/1390156.1390267. C. Spearman. 1904. The proof and measurement of association between two things. American Journal of Psychology 15:88–103. Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland. pages 151–160. http://aclweb.org/anthology/C/C14/C141016.pdf. Luke Vilnis and Andrew McCallum. 2014. Word representations via gaussian embedding. CoRR abs/1412.6623. Andrew G Wilson, Zhiting Hu, Ruslan R Salakhutdinov, and Eric P Xing. 2016a. Stochastic variational deep kernel learning. In Advances in Neural Information Processing Systems. pages 2586–2594. Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P Xing. 2016b. Deep kernel learning. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics. pages 370–378. Dongqiang Yang and David M. W. Powers. 2006. Verb similarity on the taxonomy of wordnet. In In the 3rd International WordNet Conference (GWC-06), Jeju Island, Korea. A Supplementary Material A.1 Derivation of Expected Likelihood Kernel We derive the form of expected likelihood kernel for Gaussian mixtures. Let f, g be Gaussian mixture distributions representing the words wf, wg. That is, f(x) = PK i=1 piN(x; µf,i, Σf,i) and g(x) = PK i=1 qiN(x; µg,i, Σg,i), PK i=1 pi = 1, and PK i=1 qi = 1. 1655 The expected likelihood kernel is given by Eθ(f, g) = Z K X i=1 piN(x; µf,i, Σf,i) ! · K X j=1 qjN(x; µg,j, Σg,j) ! 
dx = K X i=1 K X j=1 piqj Z N(x; µf,i, Σf,i) · N(x; µg,j, Σg,j) dx = K X i=1 K X j=1 piqjN(0; µf,i −µg,j, Σf,i + Σg,j) = K X i=1 K X j=1 piqjeξi,j where we note that R N(x; µi, Σi)N(x; µj, Σj) dx = N(0, µi −µj, Σi + Σj) (Vilnis and McCallum, 2014) and ξi,j is the log partial energy, given by equation 3. A.2 Implementation In this section we discuss practical details for training the proposed model. Reduction to Diagonal Covariance We use a diagonal Σ, in which case inverting the covariance matrix is trivial and computations are particularly efficient. Let df, dg denote the diagonal vectors of Σf, Σg The expression for ξi,j reduces to ξi,j = −1 2 D X r=1 log(dp r + dq r) −1 2 X  (µp,i −µq,j) ◦ 1 dp + dq ◦(µp,i −µq,j)  where ◦denotes element-wise multiplication. The spherical case which we use in all our experiments is similar since we simply replace a vector d with a single value. Optimization Constraint and Stability We optimize log d since each component of diagonal vector d is constrained to be positive. Similarly, we constrain the probability pi to be in [0, 1] and sum to 1 by optimizing over unconstrained scores si ∈(−∞, ∞) and using a softmax function to convert the scores to probability pi = esi PK j=1 esj . The loss computation can be numerically unstable if elements of the diagonal covariances are very small, due to the term log(df r + dg r) and 1 dq+dp . Therefore, we add a small constant ϵ = 10−4 so that df r + dg r and dq + dp becomes df r + dg r + ϵ and dq + dp + ϵ. In addition, we observe that ξi,j can be very small which would result in eξi,j ≈0 up to machine precision. In order to stabilize the computation in eq. 2, we compute its equivalent form log E(f, g) = ξi′,j′ + log K X j=1 K X i=1 piqjeξi,j−ξi′,j′ where ξi′,j′ = maxi,j ξi,j. Model Hyperparameters and Training Details In the loss function Lθ, we use a margin m = 1 and a batch size of 128. We initialize the word embeddings with a uniform distribution over [− q 3 D , q 3 D ] so that the expectation of variance is 1 and the mean is zero (LeCun et al., 1998). We initialize each dimension of the diagonal matrix (or a single value for spherical case) with a constant value v = 0.05. We also initialize the mixture scores si to be 0 so that the initial probabilities are equal among all K components. We use the threshold t = 10−5 for negative sampling, which is the recommended value for word2vec skip-gram on large datasets. We also use a separate output embeddings in addition to input embeddings, similar to word2vec implementation (Mikolov et al., 2013a,b). That is, each word has two sets of distributions qI and qO, each of which is a Gaussian mixture. For a given pair of word and context (w, c), we use the input distribution qI for w (input word) and the output distribution qO for context c (output word). We optimize the parameters of both qI and qO and use the trained input distributions qI as our final word representations. We use mini-batch asynchronous gradient descent with Adagrad (Duchi et al., 2011) which performs adaptive learning rate for each parameter. We also experiment with Adam (Kingma and Ba, 2014) which corrects the bias in adaptive gradient update of Adagrad and is proven very popular for most recent neural network models. However, we found that it is much slower than Adagrad (≈10 times). This is because the gradient computation of the model is relatively fast, so a complex gradient update algorithm such as Adam becomes the bottleneck in the optimization. 
Therefore, we choose to use Adagrad, which allows us to better scale to large datasets. We use a linearly decreasing learning rate from 0.05 to 0.00001.
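To summarize the stabilized computation described above, the following Python sketch evaluates the log expected likelihood kernel for two diagonal-covariance Gaussian mixtures with the log-sum-exp trick; the spherical case simply uses a single variance per component. This is an illustration consistent with the formulas above, not the released implementation.

import numpy as np

def log_expected_likelihood_kernel(mu_f, d_f, mu_g, d_g, p, q, eps=1e-4):
    # mu_f, mu_g: (K, D) component means; d_f, d_g: (K, D) diagonal variances;
    # p, q: (K,) mixture probabilities. The constant -D/2 * log(2*pi) of the
    # Gaussian normalizer is omitted, as in the reduced expression above.
    K, D = mu_f.shape
    xi = np.empty((K, K))          # log partial energies xi[i, j]
    for i in range(K):
        for j in range(K):
            var = d_f[i] + d_g[j] + eps           # epsilon added for stability
            diff = mu_f[i] - mu_g[j]
            xi[i, j] = -0.5 * np.sum(np.log(var)) - 0.5 * np.sum(diff * diff / var)
    # Log-sum-exp over component pairs, weighted by the mixture probabilities.
    xi_max = xi.max()
    weights = np.outer(p, q)
    return xi_max + np.log(np.sum(weights * np.exp(xi - xi_max)))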
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1657–1668 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1152 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1657–1668 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1152 Enhanced LSTM for Natural Language Inference Qian Chen University of Science and Technology of China [email protected] Xiaodan Zhu National Research Council Canada [email protected] Zhenhua Ling University of Science and Technology of China [email protected] Si Wei iFLYTEK Research [email protected] Hui Jiang York University [email protected] Diana Inkpen University of Ottawa [email protected] Abstract Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural network based inference models, which have shown to be very effective. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.6% on the Stanford Natural Language Inference Dataset. Unlike the previous top models that use very complicated network architectures, we first demonstrate that carefully designing sequential inference models based on chain LSTMs can outperform all previous models. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. Particularly, incorporating syntactic parsing information contributes to our best result—it further improves the performance even when added to the already very strong model. 1 Introduction Reasoning and inference are central to both human and artificial intelligence. Modeling inference in human language is notoriously challenging but is a basic problem towards true natural language understanding, as pointed out by MacCartney and Manning (2008), “a necessary (if not sufficient) condition for true natural language understanding is a mastery of open-domain natural language inference.” The previous work has included extensive research on recognizing textual entailment. Specifically, natural language inference (NLI) is concerned with determining whether a naturallanguage hypothesis h can be inferred from a premise p, as depicted in the following example from MacCartney (2009), where the hypothesis is regarded to be entailed from the premise. p: Several airlines polled saw costs grow more than expected, even after adjusting for inflation. h: Some of the companies in the poll reported cost increases. The most recent years have seen advances in modeling natural language inference. An important contribution is the creation of a much larger annotated dataset, the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015). The corpus has 570,000 human-written English sentence pairs manually labeled by multiple human subjects. This makes it feasible to train more complex inference models. Neural network models, which often need relatively large annotated data to estimate their parameters, have shown to achieve the state of the art on SNLI (Bowman et al., 2015, 2016; Munkhdalai and Yu, 2016b; Parikh et al., 2016; Sha et al., 2016; Paria et al., 2016). 
While some previous top-performing models use rather complicated network architectures to achieve the state-of-the-art results (Munkhdalai and Yu, 2016b), we demonstrate in this paper that enhancing sequential inference models based on chain 1657 models can outperform all previous results, suggesting that the potentials of such sequential inference approaches have not been fully exploited yet. More specifically, we show that our sequential inference model achieves an accuracy of 88.0% on the SNLI benchmark. Exploring syntax for NLI is very attractive to us. In many problems, syntax and semantics interact closely, including in semantic composition (Partee, 1995), among others. Complicated tasks such as natural language inference could well involve both, which has been discussed in the context of recognizing textual entailment (RTE) (Mehdad et al., 2010; Ferrone and Zanzotto, 2014). In this paper, we are interested in exploring this within the neural network frameworks, with the presence of relatively large training data. We show that by explicitly encoding parsing information with recursive networks in both local inference modeling and inference composition and by incorporating it into our framework, we achieve additional improvement, increasing the performance to a new state of the art with an 88.6% accuracy. 2 Related Work Early work on natural language inference has been performed on rather small datasets with more conventional methods (refer to MacCartney (2009) for a good literature survey), which includes a large bulk of work on recognizing textual entailment, such as (Dagan et al., 2005; Iftene and Balahur-Dobrescu, 2007), among others. More recently, Bowman et al. (2015) made available the SNLI dataset with 570,000 human annotated sentence pairs. They also experimented with simple classification models as well as simple neural networks that encode the premise and hypothesis independently. Rocktäschel et al. (2015) proposed neural attention-based models for NLI, which captured the attention information. In general, attention based models have been shown to be effective in a wide range of tasks, including machine translation (Bahdanau et al., 2014), speech recognition (Chorowski et al., 2015; Chan et al., 2016), image caption (Xu et al., 2015), and text summarization (Rush et al., 2015; Chen et al., 2016), among others. For NLI, the idea allows neural models to pay attention to specific areas of the sentences. A variety of more advanced networks have been developed since then (Bowman et al., 2016; Vendrov et al., 2015; Mou et al., 2016; Liu et al., 2016; Munkhdalai and Yu, 2016a; Rocktäschel et al., 2015; Wang and Jiang, 2016; Cheng et al., 2016; Parikh et al., 2016; Munkhdalai and Yu, 2016b; Sha et al., 2016; Paria et al., 2016). Among them, more relevant to ours are the approaches proposed by Parikh et al. (2016) and Munkhdalai and Yu (2016b), which are among the best performing models. Parikh et al. (2016) propose a relatively simple but very effective decomposable model. The model decomposes the NLI problem into subproblems that can be solved separately. On the other hand, Munkhdalai and Yu (2016b) propose much more complicated networks that consider sequential LSTM-based encoding, recursive networks, and complicated combinations of attention models, which provide about 0.5% gain over the results reported by Parikh et al. (2016). It is, however, not very clear if the potential of the sequential inference networks has been well exploited for NLI. 
In this paper, we first revisit this problem and show that enhancing sequential inference models based on chain networks can actually outperform all previous results. We further show that explicitly considering recursive architectures to encode syntactic parsing information for NLI could further improve the performance. 3 Hybrid Neural Inference Models We present here our natural language inference networks which are composed of the following major components: input encoding, local inference modeling, and inference composition. Figure 1 shows a high-level view of the architecture. Vertically, the figure depicts the three major components, and horizontally, the left side of the figure represents our sequential NLI model named ESIM, and the right side represents networks that incorporate syntactic parsing information in tree LSTMs. In our notation, we have two sentences a = (a1, . . . , aℓa) and b = (b1, . . . , bℓb), where a is a premise and b a hypothesis. The ai or bj ∈Rl is an embedding of l-dimensional vector, which can be initialized with some pre-trained word embeddings and organized with parse trees. The goal is to predict a label y that indicates the logic relationship between a and b. 3.1 Input Encoding We employ bidirectional LSTM (BiLSTM) as one of our basic building blocks for NLI. We first use it 1658 Figure 1: A high-level view of our hybrid neural inference networks. to encode the input premise and hypothesis (Equation (1) and (2)). Here BiLSTM learns to represent a word (e.g., ai) and its context. Later we will also use BiLSTM to perform inference composition to construct the final prediction, where BiLSTM encodes local inference information and its interaction. To bookkeep the notations for later use, we write as ¯ai the hidden (output) state generated by the BiLSTM at time i over the input sequence a. The same is applied to ¯bj: ¯ai = BiLSTM(a, i), ∀i ∈[1, . . . , ℓa], (1) ¯bj = BiLSTM(b, j), ∀j ∈[1, . . . , ℓb]. (2) Due to the space limit, we will skip the description of the basic chain LSTM and readers can refer to Hochreiter and Schmidhuber (1997) for details. Briefly, when modeling a sequence, an LSTM employs a set of soft gates together with a memory cell to control message flows, resulting in an effective modeling of tracking long-distance information/dependencies in a sequence. A bidirectional LSTM runs a forward and backward LSTM on a sequence starting from the left and the right end, respectively. The hidden states generated by these two LSTMs at each time step are concatenated to represent that time step and its context. Note that we used LSTM memory blocks in our models. We examined other recurrent memory blocks such as GRUs (Gated Recurrent Units) (Cho et al., 2014) and they are inferior to LSTMs on the heldout set for our NLI task. As discussed above, it is intriguing to explore the effectiveness of syntax for natural language inference; for example, whether it is useful even when incorporated into the best-performing models. To this end, we will also encode syntactic parse trees of a premise and hypothesis through treeLSTM (Zhu et al., 2015; Tai et al., 2015; Le and Zuidema, 2015), which extends the chain LSTM to a recursive network (Socher et al., 2011). Specifically, given the parse of a premise or hypothesis, a tree node is deployed with a tree-LSTM memory block depicted as in Figure 2 and computed with Equations (3–10). 
In short, at each node, an input vector xt and the hidden vectors of its two children (the left child hL t−1 and the right hR t−1) are taken in as the input to calculate the current node’s hidden vector ht. ct Cell × ht × f L t Left Forget Gate × f R t Right Forget Gate × it Input Gate ot Output Gate xt hL t−1 hR t−1 xt hR t−1 hL t−1 xt hR t−1 hL t−1 xt hR t−1 hL t−1 xt hR t−1 hL t−1 cL t−1 cR t−1 Figure 2: A tree-LSTM memory block. We describe the updating of a node at a high level with Equation (3) to facilitate references later in the paper, and the detailed computation is described in (4–10). Specifically, the input of a node is used to configure four gates: the input gate it, output gate ot, and the two forget gates fL t and fR t . The memory cell ct considers each child’s cell vector, cL t−1 and cR t−1, which are gated by the left forget 1659 gate fL t and right forget gate fR t , respectively. ht = TrLSTM(xt, hL t−1, hR t−1), (3) ht = ot ⊙tanh(ct), (4) ot = σ(Woxt + UL o hL t−1 + UR o hR t−1), (5) ct = f L t ⊙cL t−1 + f R t ⊙cR t−1 + it ⊙ut, (6) f L t = σ(Wfxt + ULL f hL t−1 + ULR f hR t−1), (7) f R t = σ(Wfxt + URL f hL t−1 + URR f hR t−1), (8) it = σ(Wixt + UL i hL t−1 + UR i hR t−1), (9) ut = tanh(Wcxt + UL c hL t−1 + UR c hR t−1), (10) where σ is the sigmoid function, ⊙is the elementwise multiplication of two vectors, and all W ∈ Rd×l, U ∈Rd×d are weight matrices to be learned. In the current input encoding layer, xt is used to encode a word embedding for a leaf node. Since a non-leaf node does not correspond to a specific word, we use a special vector x′ t as its input, which is like an unknown word. However, in the inference composition layer that we discuss later, the goal of using tree-LSTM is very different; the input xt will be very different as well—it will encode local inference information and will have values at all tree nodes. 3.2 Local Inference Modeling Modeling local subsentential inference between a premise and hypothesis is the basic component for determining the overall inference between these two statements. To closely examine local inference, we explore both the sequential and syntactic tree models that have been discussed above. The former helps collect local inference for words and their context, and the tree LSTM helps collect local information between (linguistic) phrases and clauses. Locality of inference Modeling local inference needs to employ some forms of hard or soft alignment to associate the relevant subcomponents between a premise and a hypothesis. This includes early methods motivated from the alignment in conventional automatic machine translation (MacCartney, 2009). In neural network models, this is often achieved with soft attention. Parikh et al. (2016) decomposed this process: the word sequence of the premise (or hypothesis) is regarded as a bag-of-word embedding vector and inter-sentence “alignment” (or attention) is computed individually to softly align each word to the content of hypothesis (or premise, respectively). While their basic framework is very effective, achieving one of the previous best results, using a pre-trained word embedding by itself does not automatically consider the context around a word in NLI. Parikh et al. (2016) did take into account the word order and context information through an optional distance-sensitive intra-sentence attention. In this paper, we argue for leveraging attention over the bidirectional sequential encoding of the input, as discussed above. 
We will show that this plays an important role in achieving our best results, and the intra-sentence attention used by Parikh et al. (2016) actually does not further improve over our model, while the overall framework they proposed is very effective. Our soft alignment layer computes the attention weights as the similarity of a hidden state tuple <¯ai, ¯bj> between a premise and a hypothesis with Equation (11). We did study more complicated relationships between ¯ai and ¯bj with multilayer perceptrons, but observed no further improvement on the heldout data. eij = ¯aT i ¯bj. (11) In the formula, ¯ai and ¯bj are computed earlier in Equations (1) and (2), or with Equation (3) when tree-LSTM is used. Again, as discussed above, we will use bidirectional LSTM and tree-LSTM to encode the premise and hypothesis, respectively. In our sequential inference model, unlike in Parikh et al. (2016) which proposed to use a function F(¯ai), i.e., a feedforward neural network, to map the original word representation for calculating eij, we instead advocate to use BiLSTM, which encodes the information in premise and hypothesis very well and achieves better performance shown in the experiment section. We tried to apply the F(.) function on our hidden states before computing eij and it did not further help our models. Local inference collected over sequences Local inference is determined by the attention weight eij computed above, which is used to obtain the local relevance between a premise and hypothesis. For the hidden state of a word in a premise, i.e., ¯ai (already encoding the word itself and its context), the relevant semantics in the hypothesis is identified and composed using eij, more specifically 1660 with Equation (12). ˜ai = ℓb X j=1 exp(eij) Pℓb k=1 exp(eik) ¯bj, ∀i ∈[1, . . . , ℓa], (12) ˜bj = ℓa X i=1 exp(eij) Pℓa k=1 exp(ekj) ¯ai, ∀j ∈[1, . . . , ℓb], (13) where ˜ai is a weighted summation of {¯bj}ℓb j=1. Intuitively, the content in {¯bj}ℓb j=1 that is relevant to ¯ai will be selected and represented as ˜ai. The same is performed for each word in the hypothesis with Equation (13). Local inference collected over parse trees We use tree models to help collect local inference information over linguistic phrases and clauses in this layer. The tree structures of the premise and hypothesis are produced by a constituency parser. Once the hidden states of a tree are all computed with Equation (3), we treat all tree nodes equally as we do not have further heuristics to discriminate them, but leave the attention weights to figure out their relationship. So, we use Equation (11) to compute the attention weights for all node pairs between a premise and hypothesis. This connects all words, constituent phrases, and clauses between the premise and hypothesis. We then collect the information between all the pairs with Equations (12) and (13) and feed them into the next layer. Enhancement of local inference information In our models, we further enhance the local inference information collected. We compute the difference and the element-wise product for the tuple <¯a, ˜a> as well as for <¯b, ˜b>. We expect that such operations could help sharpen local inference information between elements in the tuples and capture inference relationships such as contradiction. The difference and element-wise product are then concatenated with the original vectors, ¯a and ˜a, or ¯b and ˜b, respectively (Mou et al., 2016; Zhang et al., 2017). The enhancement is performed for both the sequential and the tree models. 
ma = [¯a; ˜a; ¯a −˜a; ¯a ⊙˜a], (14) mb = [¯b; ˜b; ¯b −˜b; ¯b ⊙˜b]. (15) This process could be regarded as a special case of modeling some high-order interaction between the tuple elements. Along this direction, we have also further modeled the interaction by feeding the tuples into feedforward neural networks and added the top layer hidden states to the above concatenation. We found that it does not further help the inference accuracy on the heldout dataset. 3.3 Inference Composition To determine the overall inference relationship between a premise and hypothesis, we explore a composition layer to compose the enhanced local inference information ma and mb. We perform the composition sequentially or in its parse context using BiLSTM and tree-LSTM, respectively. The composition layer In our sequential inference model, we keep using BiLSTM to compose local inference information sequentially. The formulas for BiLSTM are similar to those in Equations (1) and (2) in their forms so we skip the details, but the aim is very different here—they are used to capture local inference information ma and mb and their context here for inference composition. In the tree composition, the high-level formulas of how a tree node is updated to compose local inference is as follows: va,t = TrLSTM(F(ma,t), hL t−1, hR t−1), (16) vb,t = TrLSTM(F(mb,t), hL t−1, hR t−1). (17) We propose to control model complexity in this layer, since the concatenation we described above to compute ma and mb can significantly increase the overall parameter size to potentially overfit the models. We propose to use a mapping F as in Equation (16) and (17). More specifically, we use a 1-layer feedforward neural network with the ReLU activation. This function is also applied to BiLSTM in our sequential inference composition. Pooling Our inference model converts the resulting vectors obtained above to a fixed-length vector with pooling and feeds it to the final classifier to determine the overall inference relationship. We consider that summation (Parikh et al., 2016) could be sensitive to the sequence length and hence less robust. We instead suggest the following strategy: compute both average and max pooling, and concatenate all these vectors to form the final fixed length vector v. Our experiments show that this leads to significantly better results than summation. The final fixed length vector v is calculated 1661 as follows: va,ave = ℓa X i=1 va,i ℓa , va,max = ℓa max i=1 va,i, (18) vb,ave = ℓb X j=1 vb,j ℓb , vb,max = ℓb max j=1 vb,j, (19) v = [va,ave; va,max; vb,ave; vb,max]. (20) Note that for tree composition, Equation (20) is slightly different from that in sequential composition. Our tree composition will concatenate also the hidden states computed for the roots with Equations (16) and (17), which are not shown here. We then put v into a final multilayer perceptron (MLP) classifier. The MLP has a hidden layer with tanh activation and softmax output layer in our experiments. The entire model (all three components described above) is trained end-to-end. For training, we use multi-class cross-entropy loss. Overall inference models Our model can be based only on the sequential networks by removing all tree components and we call it Enhanced Sequential Inference Model (ESIM) (see the left part of Figure 1). We will show that ESIM outperforms all previous results. We will also encode parse information with tree LSTMs in multiple layers as described (see the right side of Figure 1). 
We train this model and incorporate it into ESIM by averaging the predicted probabilities to get the final label for a premise-hypothesis pair. We will show that parsing information complements very well with ESIM and further improves the performance, and we call the final model Hybrid Inference Model (HIM). 4 Experimental Setup Data The Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) focuses on three basic relationships between a premise and a potential hypothesis: the premise entails the hypothesis (entailment), they contradict each other (contradiction), or they are not related (neutral). The original SNLI corpus contains also “the other” category, which includes the sentence pairs lacking consensus among multiple human annotators. As in the related work, we remove this category. We used the same split as in Bowman et al. (2015) and other previous work. The parse trees used in this paper are produced by the Stanford PCFG Parser 3.5.3 (Klein and Manning, 2003) and they are delivered as part of the SNLI corpus. We use classification accuracy as the evaluation metric, as in related work. Training We use the development set to select models for testing. To help replicate our results, we publish our code1. Below, we list our training details. We use the Adam method (Kingma and Ba, 2014) for optimization. The first momentum is set to be 0.9 and the second 0.999. The initial learning rate is 0.0004 and the batch size is 32. All hidden states of LSTMs, tree-LSTMs, and word embeddings have 300 dimensions. We use dropout with a rate of 0.5, which is applied to all feedforward connections. We use pre-trained 300-D Glove 840B vectors (Pennington et al., 2014) to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. All vectors including word embedding are updated during training. 5 Results Overall performance Table 1 shows the results of different models. The first row is a baseline classifier presented by Bowman et al. (2015) that considers handcrafted features such as BLEU score of the hypothesis with respect to the premise, the overlapped words, and the length difference between them, etc. The next group of models (2)-(7) are based on sentence encoding. The model of Bowman et al. (2016) encodes the premise and hypothesis with two different LSTMs. The model in Vendrov et al. (2015) uses unsupervised “skip-thoughts” pre-training in GRU encoders. The approach proposed by Mou et al. (2016) considers tree-based CNN to capture sentence-level semantics, while the model of Bowman et al. (2016) introduces a stack-augmented parser-interpreter neural network (SPINN) which combines parsing and interpretation within a single tree-sequence hybrid model. The work by Liu et al. (2016) uses BiLSTM to generate sentence representations, and then replaces average pooling with intra-attention. The approach proposed by Munkhdalai and Yu (2016a) presents a memory augmented neural network, neural semantic encoders (NSE), to encode sentences. The next group of methods in the table, models 1https://github.com/lukecq1231/nli 1662 Model #Para. 
Train Test (1) Handcrafted features (Bowman et al., 2015) 99.7 78.2 (2) 300D LSTM encoders (Bowman et al., 2016) 3.0M 83.9 80.6 (3) 1024D pretrained GRU encoders (Vendrov et al., 2015) 15M 98.8 81.4 (4) 300D tree-based CNN encoders (Mou et al., 2016) 3.5M 83.3 82.1 (5) 300D SPINN-PI encoders (Bowman et al., 2016) 3.7M 89.2 83.2 (6) 600D BiLSTM intra-attention encoders (Liu et al., 2016) 2.8M 84.5 84.2 (7) 300D NSE encoders (Munkhdalai and Yu, 2016a) 3.0M 86.2 84.6 (8) 100D LSTM with attention (Rocktäschel et al., 2015) 250K 85.3 83.5 (9) 300D mLSTM (Wang and Jiang, 2016) 1.9M 92.0 86.1 (10) 450D LSTMN with deep attention fusion (Cheng et al., 2016) 3.4M 88.5 86.3 (11) 200D decomposable attention model (Parikh et al., 2016) 380K 89.5 86.3 (12) Intra-sentence attention + (11) (Parikh et al., 2016) 580K 90.5 86.8 (13) 300D NTI-SLSTM-LSTM (Munkhdalai and Yu, 2016b) 3.2M 88.5 87.3 (14) 300D re-read LSTM (Sha et al., 2016) 2.0M 90.7 87.5 (15) 300D btree-LSTM encoders (Paria et al., 2016) 2.0M 88.6 87.6 (16) 600D ESIM 4.3M 92.6 88.0 (17) HIM (600D ESIM + 300D Syntactic tree-LSTM) 7.7M 93.5 88.6 Table 1: Accuracies of the models on SNLI. Our final model achieves the accuracy of 88.6%, the best result observed on SNLI, while our enhanced sequential encoding model attains an accuracy of 88.0%, which also outperform the previous models. (8)-(15), are inter-sentence attention-based model. The model marked with Rocktäschel et al. (2015) is LSTMs enforcing the so called word-by-word attention. The model of Wang and Jiang (2016) extends this idea to explicitly enforce word-by-word matching between the hypothesis and the premise. Long short-term memory-networks (LSTMN) with deep attention fusion (Cheng et al., 2016) link the current word to previous words stored in memory. Parikh et al. (2016) proposed a decomposable attention model without relying on any word-order information. In general, adding intra-sentence attention yields further improvement, which is not very surprising as it could help align the relevant text spans between premise and hypothesis. The model of Munkhdalai and Yu (2016b) extends the framework of Wang and Jiang (2016) to a full n-ary tree model and achieves further improvement. Sha et al. (2016) proposes a special LSTM variant which considers the attention vector of another sentence as an inner state of LSTM. Paria et al. (2016) use a neural architecture with a complete binary tree-LSTM encoders without syntactic information. The table shows that our ESIM model achieves an accuracy of 88.0%, which has already outperformed all the previous models, including those using much more complicated network architectures (Munkhdalai and Yu, 2016b). We ensemble our ESIM model with syntactic tree-LSTMs (Zhu et al., 2015) based on syntactic parse trees and achieve significant improvement over our best sequential encoding model ESIM, attaining an accuracy of 88.6%. This shows that syntactic tree-LSTMs complement well with ESIM. Model Train Test (17) HIM (ESIM + syn.tree) 93.5 88.6 (18) ESIM + tree 91.9 88.2 (16) ESIM 92.6 88.0 (19) ESIM - ave./max 92.9 87.1 (20) ESIM - diff./prod. 91.5 87.0 (21) ESIM - inference BiLSTM 91.3 87.3 (22) ESIM - encoding BiLSTM 88.7 86.3 (23) ESIM - P-based attention 91.6 87.2 (24) ESIM - H-based attention 91.4 86.5 (25) syn.tree 92.9 87.8 Table 2: Ablation performance of the models. Ablation analysis We further analyze the major components that are of importance to help us achieve good performance. 
From the best model, we first replace the syntactic tree-LSTM with the full tree-LSTM without encoding syntactic parse information. More specifically, two adjacent words in a sentence are merged to form a parent node, and 1663 1 3 5 7 21 23 25 27 29 standing 28 while 26 newspaper 24 a 22 reading 8 16 18 20 jeans 19 blue 17 a 9 15 and 10 12 14 shirt 13 white 11 a 6 wearing 4 man 2 A (a) Binarized constituency tree of premise 1 5 17 . 6 8 12 14 16 newspaper 15 a 13 reading 9 11 down 10 sitting 7 is 2 4 man 3 A (b) Binarized constituency tree of hypothesis (c) Normalized attention weights of tree-LSTM (d) Input gate of tree-LSTM in inference composition (l2-norm) (e) Input gate of BiLSTM in inference composition (l2-norm) (f) Normalized attention weights of BiLSTM Figure 3: An example for analysis. Subfigures (a) and (b) are the constituency parse trees of the premise and hypothesis, respectively. “-” means a non-leaf or a null node. Subfigures (c) and (f) are attention visualization of the tree model and ESIM, respectively. The darker the color, the greater the value. The premise is on the x-axis and the hypothesis is on y-axis. Subfigures (d) and (e) are input gates’ l2-norm of tree-LSTM and BiLSTM in inference composition, respectively. this process continues and results in a full binary tree, where padding nodes are inserted when there are no enough leaves to form a full tree. Each tree node is implemented with a tree-LSTM block (Zhu et al., 2015) same as in model (17). Table 2 shows that with this replacement, the performance drops to 88.2%. Furthermore, we note the importance of the layer performing the enhancement for local inference information in Section 3.2 and the pooling layer in inference composition in Section 3.3. Table 2 suggests that the NLI task seems very sensitive to the 1664 layers. If we remove the pooling layer in inference composition and replace it with summation as in Parikh et al. (2016), the accuracy drops to 87.1%. If we remove the difference and elementwise product from the local inference enhancement layer, the accuracy drops to 87.0%. To provide some detailed comparison with Parikh et al. (2016), replacing bidirectional LSTMs in inference composition and also input encoding with feedforward neural network reduces the accuracy to 87.3% and 86.3% respectively. The difference between ESIM and each of the other models listed in Table 2 is statistically significant under the one-tailed paired t-test at the 99% significance level. The difference between model (17) and (18) is also significant at the same level. Note that we cannot perform significance test between our models with the other models listed in Table 1 since we do not have the output of the other models. If we remove the premise-based attention from ESIM (model 23), the accuracy drops to 87.2% on the test set. The premise-based attention means when the system reads a word in a premise, it uses soft attention to consider all relevant words in hypothesis. Removing the hypothesis-based attention (model 24) decrease the accuracy to 86.5%, where hypothesis-based attention is the attention performed on the other direction for the sentence pairs. The results show that removing hypothesisbased attention affects the performance of our model more, but removing the attention from the other direction impairs the performance too. The stand-alone syntactic tree-LSTM model achieves an accuracy of 87.8%, which is comparable to that of ESIM. 
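For reference, the layers whose ablation is analyzed above, namely the bidirectional soft attention of Equations (11)-(13), the local inference enhancement of Equations (14)-(15), and the average/max pooling of Equations (18)-(20), can be sketched in Python as follows. This is a minimal illustration of the tensor operations on precomputed BiLSTM states, not the released implementation; in the full model, the pooling is applied to the outputs of the inference-composition BiLSTM over m_a and m_b.

import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def esim_local_inference(a_bar, b_bar):
    # a_bar: (len_a, d) and b_bar: (len_b, d) BiLSTM hidden states.
    e = a_bar @ b_bar.T                       # Eq. (11): attention weights e_ij
    a_tilde = softmax(e, axis=1) @ b_bar      # Eq. (12): premise attends to hypothesis
    b_tilde = softmax(e, axis=0).T @ a_bar    # Eq. (13): hypothesis attends to premise
    # Eqs. (14)-(15): concatenate with difference and element-wise product.
    m_a = np.concatenate([a_bar, a_tilde, a_bar - a_tilde, a_bar * a_tilde], axis=1)
    m_b = np.concatenate([b_bar, b_tilde, b_bar - b_tilde, b_bar * b_tilde], axis=1)
    return m_a, m_b

def pool(v_a, v_b):
    # Eqs. (18)-(20): average and max pooling over time, then concatenation.
    return np.concatenate([v_a.mean(axis=0), v_a.max(axis=0),
                           v_b.mean(axis=0), v_b.max(axis=0)])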
We also computed the oracle score of merging syntactic tree-LSTM and ESIM, which picks the right answer if either is right. Such an oracle/upper-bound accuracy on test set is 91.7%, which suggests how much tree-LSTM and ESIM could ideally complement each other. As far as the speed is concerned, training tree-LSTM takes about 40 hours on Nvidia-Tesla K40M and ESIM takes about 6 hours, which is easily extended to larger scale of data. Further analysis We showed that encoding syntactic parsing information helps recognize natural language inference—it additionally improves the strong system. Figure 3 shows an example where tree-LSTM makes a different and correct decision. In subfigure (d), the larger values at the input gates on nodes 9 and 10 indicate that those nodes are important in making the final decision. We observe that in subfigure (c), nodes 9 and 10 are aligned to node 29 in the premise. Such information helps the system decide that this pair is a contradiction. Accordingly, in subfigure (e) of sequential BiLSTM, the words sitting and down do not play an important role for making the final decision. Subfigure (f) shows that sitting is equally aligned with reading and standing and the alignment for word down is not that useful. 6 Conclusions and Future Work We propose neural network models for natural language inference, which achieve the best results reported on the SNLI benchmark. The results are first achieved through our enhanced sequential inference model, which outperformed the previous models, including those employing more complicated network architectures, suggesting that the potential of sequential inference models have not been fully exploited yet. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. Particularly, incorporating syntactic parsing information contributes to our best result: it further improves the performance even when added to the already very strong model. Future work interesting to us includes exploring the usefulness of external resources such as WordNet and contrasting-meaning embedding (Chen et al., 2015) to help increase the coverage of wordlevel inference relations. Modeling negation more closely within neural network frameworks (Socher et al., 2013; Zhu et al., 2014) may help contradiction detection. Acknowledgments The first and the third author of this paper were supported in part by the Science and Technology Development of Anhui Province, China (Grants No. 2014z02006), the Fundamental Research Funds for the Central Universities (Grant No. WK2350000001) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02070006). 1665 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473. Samuel Bowman, Gabor Angeli, Christopher Potts, and D. Christopher Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 632–642. https://doi.org/10.18653/v1/D15-1075. Samuel Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, D. Christopher Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1466–1477. https://doi.org/10.18653/v1/P16-1139. William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016. IEEE, pages 4960–4964. https://doi.org/10.1109/ICASSP.2016.7472621. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling document. In Subbarao Kambhampati, editor, Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016. IJCAI/AAAI Press, pages 2754–2760. http://www.ijcai.org/Abstract/16/391. Zhigang Chen, Wei Lin, Qian Chen, Xiaoping Chen, Si Wei, Hui Jiang, and Xiaodan Zhu. 2015. Revisiting word embedding for contrasting meaning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 106–115. https://doi.org/10.3115/v1/P15-1011. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 551–561. http://aclweb.org/anthology/D16-1053. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Dekai Wu, Marine Carpuat, Xavier Carreras, and Eva Maria Vecchi, editors, Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014. Association for Computational Linguistics, pages 103– 111. http://aclweb.org/anthology/W/W14/W144012.pdf. Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 577–585. http://papers.nips.cc/paper/5847attention-based-models-for-speech-recognition. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers. pages 177–190. Lorenzo Ferrone and Massimo Fabio Zanzotto. 2014. Towards syntax-aware compositional distributional semantic models. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, pages 721–730. http://aclweb.org/anthology/C141068. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735. 
Adrian Iftene and Alexandra Balahur-Dobrescu. 2007. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, Association for Computational Linguistics, chapter Hypothesis Transformation and Semantic Variability Rules Used in Recognizing Textual Entailment, pages 125– 130. http://aclweb.org/anthology/W07-1421. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. http://aclweb.org/anthology/P031054. Phong Le and Willem Zuidema. 2015. Compositional distributional semantics with long short term memory. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, pages 10–19. https://doi.org/10.18653/v1/S15-1002. 1666 Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR abs/1605.09090. http://arxiv.org/abs/1605.09090. Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford University. Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22Nd International Conference on Computational Linguistics - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, COLING ’08, pages 521–528. http://dl.acm.org/citation.cfm?id=1599081.1599147. Yashar Mehdad, Alessandro Moschitti, and Massimo Fabio Zanzotto. 2010. Syntactic/semantic structures for textual entailment recognition. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1020– 1028. http://aclweb.org/anthology/N10-1146. Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 130–136. https://doi.org/10.18653/v1/P16-2022. Tsendsuren Munkhdalai and Hong Yu. 2016a. Neural semantic encoders. CoRR abs/1607.04315. http://arxiv.org/abs/1607.04315. Tsendsuren Munkhdalai and Hong Yu. 2016b. Neural tree indexers for text understanding. CoRR abs/1607.04492. http://arxiv.org/abs/1607.04492. Biswajit Paria, K. M. Annervaz, Ambedkar Dukkipati, Ankush Chatterjee, and Sanjay Podder. 2016. A neural architecture mimicking humans end-to-end for natural language inference. CoRR abs/1611.04741. http://arxiv.org/abs/1611.04741. Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2249–2255. http://aclweb.org/anthology/D16-1244. Barbara Partee. 1995. Lexical semantics and compositionality. Invitation to Cognitive Science 1:311–360. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1532–1543. 
https://doi.org/10.3115/v1/D14-1162. Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. CoRR abs/1509.06664. http://arxiv.org/abs/1509.06664. Alexander Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 379–389. https://doi.org/10.18653/v1/D15-1044. Lei Sha, Baobao Chang, Zhifang Sui, and Sujian Li. 2016. Reading and thinking: Re-read LSTM unit for textual entailment recognition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 2870–2879. http://aclweb.org/anthology/C161270. Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011. Omnipress, pages 129–136. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, D. Christopher Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1631–1642. http://aclweb.org/anthology/D13-1170. Sheng Kai Tai, Richard Socher, and D. Christopher Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1556–1566. https://doi.org/10.3115/v1/P15-1150. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. CoRR abs/1511.06361. http://arxiv.org/abs/1511.06361. Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1442– 1451. https://doi.org/10.18653/v1/N16-1170. 1667 Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015. pages 2048–2057. http://jmlr.org/proceedings/papers/v37/xuc15.html. Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, Si Wei, and Hui Jiang. 2017. Exploring question understanding and adaptation in neural-network-based question answering. CoRR abs/arXiv:1703.04617v2. https://arxiv.org/abs/1703.04617. Xiaodan Zhu, Hongyu Guo, Saif Mohammad, and Svetlana Kiritchenko. 2014. An empirical study on the effect of negation words on sentiment. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 304–313. https://doi.org/10.3115/v1/P141029. 
Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015. pages 1604–1612. http://jmlr.org/proceedings/papers/v37/zhub15.html.
2017
152
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1669–1678 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1153 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1669–1678 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1153 Linguistic analysis of differences in portrayal of movie characters Anil Ramakrishna1, Victor R. Mart´ınez1, Nikolaos Malandrakis1, Karan Singla1, and Shrikanth Narayanan1,2 1Department of Computer Science 2Department of Electrical Engineering University of Southern California, Los Angeles, USA {akramakr, victorrm, malandra, singlak}@usc.edu, [email protected] Abstract We examine differences in portrayal of characters in movies using psycholinguistic and graph theoretic measures computed directly from screenplays. Differences are examined with respect to characters’ gender, race, age and other metadata. Psycholinguistic metrics are extrapolated to dialogues in movies using a linear regression model built on a set of manually annotated seed words. Interesting patterns are revealed about relationships between genders of production team and the gender ratio of characters. Several correlations are noted between gender, race, age of characters and the linguistic metrics. 1 Introduction Movies are often described as having the power to influence individual beliefs and values. In (Cape, 2003), the authors assert movies’ influence in both creating new thinking patterns in previously unexplored social phenomena, especially in children, as well as their ability to update an individual’s existing social boundaries based on what is shown on screen as the ”norm”. Some authors claim the inverse (Wedding and Boyd, 1999): that movies reflect existing cultural values of the society, adding weight to their ability in influencing individual beliefs of what is accepted as the norm. As a result, they are studied in multiple disciplines to analyze their influence. Movies are particularly scrutinized in aspects involving negative stereotyping (Cape, 2003; Dimnik and Felton, 2006; Ter Bogt et al., 2010; Hedley, 1994) since this may introduce questionable beliefs in viewers. Negative stereotyping is believed to impact society in multiple aspects such as self-induced undermining of ability (Davies et al., 2005) as well as causing forms of prejudice that can impact leadership or employment prospects (Eagly and Karau, 2002; Niven, 2006). Studies in analyzing stereotyping in movies typically rely on collecting manual annotations on a small set of movies on which hypotheses tests are conducted (Behm-Morawitz and Mastro, 2008; Benshoff and Griffin, 2011; Hooks, 2009). In this work, we present large scale automated analyses of movie characters using language used in dialogs to study stereotyping along factors such as gender, race and age. Language use has been long known as a strong indicator of the speaker’s psychological and emotional state (Gottschalk and Gleser, 1969) and is well studied in a number of applications such as automatic personality detection (Mairesse et al., 2007) and psychotherapy (Xiao et al., 2015; Pennebaker et al., 2003). Computational analysis of language has been particularly popular thanks to advancements in computing and the ease of conducting large scale analysis of text on computers (Pennebaker et al., 2015). 
To perform our analysis, we construct a new movie screenplay corpus 1 that includes nearly 1000 movie scripts obtained from the Internet. For each movie in the corpus, we obtain additional metadata such as cast, genre, writers and directors, and also collect actor level demographic information such as gender, race and age. We use two kinds of measures in our analyses: (i) linguistic metrics that capture various psychological constructs and behaviors, estimated using dialogues from the screenplay; and (ii) graph theoretic metrics estimated from character network graphs, which are constructed to model intercharacter interactions in the movie. The linguistic metrics include psycholinguistic normatives, 1http://sail.usc.edu/mica/text_corpus_ release.php 1669 which provide word level scores on a numeric scale which are then aggregated at the dialog level, and metrics from the Linguistic Inquiry and Word Counts tool (LIWC) which capture usage of well studied stereotyping dimensions such as sexuality. We estimate centrality metrics from the character network graphs to measure relative importance of the different characters, which are analyzed with respect to the different factors of gender, race and age. The main contributions of this work are as follows: (i) we present a scalable analysis of differences in portrayal of various character subgroups in movies using their language use, (ii) we construct a new corpus with detailed annotations for our analysis and (iii) we highlight several differences in the portrayal of characters along factors such as race, age and gender. The rest of the paper is organized as follows: in section 2 we describe related work. We explain the data collection process in section 3 and experimental procedure in section 4. We explain results in section 5 and conclude in section 6. 2 Related work Previous works in studying representation in movies largely focus on relative frequencies, particularly on character gender. In (Smith et al., 2014), the authors studied 120 movies from around the globe which were manually annotated to capture information about character gender, age, careers, writer gender and director gender. However, since the annotations are done manually, collecting information on new movies is a laborious process. We avoided this by estimating the metadata computationally, enabling us to scale up efficiently. Automated analyses of movies using computational techniques to analyze representation has recently gained some attention. In (NYFA, 2013; Polygraph, 2016), the authors examine differences in relative frequency of female characters and note considerable disparities in gender ratio in these movies. However, the analyses there too are limited to comparing relative frequencies. Our work is closest to (Ramakrishna et al., 2015) where the authors study difference in language used in movies across genders, but their analysis is one dimensional. In our work we perform fine grained comparisons of character portrayal using multiple language based metrics along factors such as gender, race and age on a newly created corpus. 3 Data 3.1 Raw screenplay We fetch movie screenplay files from two primary sources: imsdb (IMSDb, 2017) and daily scripts (DailyScript, 2017). In total, we retrieved 1547 movies. After removing duplicates we retain 1434 raw screenplay files, of which 489 were corrupted or empty leaving us with 945 usable screenplays. Tables 1, 3 and 4 list statistics about the corpus. 
3.2 Script parser The screenplay files are formatted in human readable format and include dialogues tagged with character names along with auxiliary information of the scene such as shot location (interior/exterior), character placement and scene context. The screenplays are from a diverse set of writers and include a significant amount of noise and inconsistencies in their structure. To extract the relevant information, we developed a text parser 2 that accepts raw script files and outputs utterances along with character names. We ignore scene context information and primarily focus on spoken dialogues to study language usage in the movies. 3.3 Movie and character meta-data For each parsed movie, we fetch relevant metadata such as year of release, directors, writers, and producers from the Internet Movie Database (IMDb, 2017). Since most screenplays are drafts and subject to revisions such as changes in character names, matching them to an entry from IMDb is nontrivial. We first start with a list of all movies that have a close match with the screenplay name; given this list of potential matches we compute name alignment scores for each entry as the percentage of character names from the script found online. The character names are mapped using term frequency-inverse document frequency (TFIDF) to compute the name alignment score following (Cohen et al., 2003). Finally, the entry with highest alignment score is chosen. For all actors listed in the aligned result, we collect their age, gender and race as detailed below. 2https://bitbucket.org/anil_ ramakrishna/scriptparser 1670 3.3.1 Gender Given the names of actors and other members of production team found in a movie, we use a name based gender classifier to predict their gender information. Table 4 lists statistics on gender ratios for the production team in the corpus. Femaleto-male ratios were found in close agreement with previous works (Smith et al., 2014). As mentioned above, several screenplays get revised during production. In particular character names get changed, sometimes even gender. As a result, some characters may not be aligned to the correct entry from IMDb. In addition, digitized screenplays sometime include significant noise thanks to optical character recognition errors, leading to character names failing to align with entries from IMDb. To correct these, we perform manual cleanup of all the movie alignments, fix incorrect gender maps, and manually force match movies if they’re mapped to the wrong IMDb entry. 3.3.2 Age We also extract age for each actor to study possible age related biases in movies. We include age in our analysis since studies report preferential biases with age in employment particularly when combined with gender (Lincoln and Allen, 2004). In addition, there may be biases in portrayal of specific age groups when combined with gender and race. For each actor in the mapped IMDb entry, we collect his/her birthday information. We subtract the movie production year obtained also from IMDb from the actor’s birthday to get an estimate of the actor’s age during the movie’s production. We note however that the age obtained in this manner may be different from the portrayed age of the character. To account for this we bin the actors into fifteen year age groups before our analysis, since its generally unlikely to have actors further than fifteen years from their portrayed age. 
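As a concrete illustration of the alignment step described above, the following sketch scores a candidate IMDb entry by the fraction of parsed character names that find a close TF-IDF match in its cast list, and keeps the highest-scoring candidate. This is not the authors' code: the function names, the character n-gram TF-IDF representation, and the 0.5 matching threshold are assumptions standing in for the string-matching scheme of (Cohen et al., 2003).

```python
# Illustrative sketch: score how well a parsed screenplay's character names
# align with the cast list of a candidate IMDb entry, then keep the candidate
# with the highest alignment score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def alignment_score(script_names, imdb_names, threshold=0.5):
    """Fraction of script character names matching some IMDb character name."""
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
    vectorizer.fit(script_names + imdb_names)  # shared vocabulary for both sides
    sims = cosine_similarity(vectorizer.transform(script_names),
                             vectorizer.transform(imdb_names))
    matched = sum(1 for row in sims if row.max() >= threshold)
    return matched / max(len(script_names), 1)

def best_imdb_match(script_names, candidates):
    """candidates: dict mapping an IMDb title id to its list of character names."""
    return max(candidates, key=lambda tid: alignment_score(script_names, candidates[tid]))
```

In practice the candidate set would first be restricted to movies whose titles closely match the screenplay name, as described above.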
3.3.3 Race We parse ethnicity information from the website (ethnicelebs.com, 2017), which includes ethnicity for approximately 8000 different actors. The information obtained from this site is primarily submitted by independent users, and exhibits significant amount of variation among the possible ethnicities with about 750 different unique ethnicity types. Since we are more specifically interested in Race # Actors Percentage African 585 7.44% Caucasian 6539 83.24% East Asian 73 0.93% Latino/Hispanic 161 2.05% Native American 15 0.19% Pacific Islander 5 0.063% South Asian 43 0.547% Mixed 434 5.52% Table 1: Racial categories racial representations, we map the ethnicity types to race using Amazon Mechanical Turk (MTurk). We use a modified version of the racial categories from the US census which are listed in Table 1 along with frequency of actors from each racial category in our corpus. The ethnicities obtained from the site above primarily cover major actors with a fan base with no information for several actors who play minor roles. We annotate racial information for nearly 2000 such actors using MTurk with two annotations for each actor, manually correcting nearly 400 cases in which the annotators disagreed. 4 Experiments 4.1 Character portrayal using language To study differences in portrayal of characters, we use two different metrics: psycholinguistic normatives, which are designed to capture the underlying emotional state of the speaker; and LIWC metrics, which provide a measure of the speaker’s affinity to different social and physical constructs such as religion and death. We explain these two metrics in detail below. 4.1.1 Psycholinguistic normatives Psycholinguistic normatives provide a measure of various emotional and psychological constructs of the speaker, such as arousal, valence, concreteness, intelligibility, etc. and are computed entirely from language usage. They are relatively easy to compute, provide reliable indicators of the above constructs, and have been used in a variety of tasks in natural language processing such as information retrieval (Tanaka et al., 2013), sentiment analysis (Nielsen, 2011), text based personality prediction (Mairesse et al., 2007) and opinion mining. The numeric ratings are typically extrapolated from a small set of keywords which are annotated 1671 by psychologists. Manual annotations of word ratings is a laborious process and is hence limited to a few thousand words (Clark and Paivio, 2004). Automatic extrapolation of these ratings to words not covered by the manual annotations can be done using structured databases which provide relationships between words such as synonymy and hyponymy (Liu et al., 2014), or using context based semantic similarity. In this work, we use the model described in (Malandrakis and Narayanan, 2015) where the authors use linear regression to compute normative scores for an input word w based on its similarity to a set of concept words si. r(w) = θ0 + X i θi · sim(w, si) (1) where, r(w) is the computed normative score for word w, θ0 and θi are regression coefficients and sim is similarity between the given word w and concept words si. The concept words can either be hand crafted suitably for the domain or chosen automatically from data. Similar to (Malandrakis and Narayanan, 2015), we create training data by posing queries on the Yahoo search engine from words of the aspell spell checker of which top 500 previews are collected from each query. 
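The extrapolation model of Equation (1) amounts to a linear regression from word-to-concept similarities to the manually annotated seed ratings. The sketch below is illustrative only: it assumes a precomputed similarity function sim(word, concept) and a seed lexicon, omits the web-corpus construction, and all function names are assumptions rather than the authors' code.

```python
# Illustrative sketch: fit Eq. (1), r(w) = theta_0 + sum_i theta_i * sim(w, s_i),
# by least squares on a seed lexicon of manually rated words.
import numpy as np

def fit_norm_model(seed_words, seed_ratings, concept_words, sim):
    # Design matrix: one similarity feature per concept word, plus an intercept.
    X = np.array([[sim(w, s) for s in concept_words] for w in seed_words])
    X = np.hstack([np.ones((len(seed_words), 1)), X])
    y = np.array(seed_ratings)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def predict_norm(word, concept_words, sim, theta):
    x = np.array([1.0] + [sim(word, s) for s in concept_words])
    return float(x @ theta)  # e.g., a valence or arousal score in [-1, 1]
```

A separate model of this form can be fit for each norm (valence, arousal, age of acquisition, gender ladenness).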
From this corpus, the top 10000 most frequent words with atleast 3 characters were were used as concept words in extrapolation of all the norms. The linear regression model is trained using normative ratings for the manually annotated words by computing their similarity to the concept words. The similarity function sim is the cosine of binary context vectors with window size 1. The computed normatives are in the range [−1, 1]. The psycholinguistic normatives used in this work are listed in Table 2. Valence is the degree of positive or negative emotion evoked by the word. Arousal is a measure of excitement in the speaker. Valence and arousal combined are common indicators used to map emotions. Age of Acquisition refers to the average age at which the word is learned and it denotes sophistication of language use. Gender Ladenness is a measure of masculine or feminine association of a word. 10 fold Cross Validation tests are performed on the normative scores predicted by the regression model given by equation 1. Correlation coefficients of the selected normatives with the manual annotations are as follows: Arousal (0.7), Valence (0.88), Age of Acquisition (0.86) and Gender Ladenness (0.8). The high correlations render confidence in the psycholinguistic models. In our experiments, the normative scores are computed on content words from each dialog. We filter out all words other than nouns, verbs, adjectives and adverbs. Word level scores are aggregated at the dialog level using arithmetic mean. 4.1.2 Linguistic inquiry and word counts (LIWC) LIWC is a text processing application that processes raw text and outputs percentage of words from the text that belong to linguistic, affective, perceptual and other dimensions. It operates by maintaining a diverse set of dictionaries of words each belonging to a unique dimension. Input texts are processed word by word; each word is searched in the internal dictionaries and the corresponding counter is incremented if a word is found in that dictionary. Finally, percentage of words from the input text belonging to the different dimensions are returned. For our experiments, we treat each utterance in the movie as a unique document and obtain values for the LIWC metrics. Table 2 lists the metrics used in our experiments. 4.2 Character network analytics In order to study representation of the different subgroups as major characters in movies, we construct a network of interaction between characters using which we compute importance measures for each character. From each movie script, we construct an undirected and unweighted graph where nodes represent characters. We place an edge eab if two characters A and B interact at least once in the movie. For our experiments we assume interaction between A and B if there is at least one scene in which one speaks right after another. This graph creation method based on scene cooccurrence is similar to the approach used in (Beveridge and Shan, 2016). We estimate different measures of a node’s importance within the character network and use it as proxy for the character’s importance. We employ two types of centralities: betweenness centrality, the number of shortest paths that go through the node, and degree centrality, which is the number of edges incident on a node. 
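A possible implementation of the character network and the two centrality measures is sketched below using the networkx library; the scene segmentation and speaker order are assumed to come from the script parser of Section 3.2, and the function names are illustrative assumptions.

```python
# Illustrative sketch: build the unweighted character graph (an edge whenever
# two characters speak one right after the other in some scene) and compute
# degree and betweenness centrality for every character.
import networkx as nx

def build_character_graph(scenes):
    """scenes: list of scenes, each a list of speaker names in utterance order."""
    g = nx.Graph()
    for speakers in scenes:
        g.add_nodes_from(speakers)
        for a, b in zip(speakers, speakers[1:]):  # adjacent turns in a scene
            if a != b:
                g.add_edge(a, b)
    return g

def character_centralities(g):
    return {
        "degree": nx.degree_centrality(g),          # normalized neighbour count
        "betweenness": nx.betweenness_centrality(g) # shortest paths through the node
    }
```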
These centrality measurements have been previously used in the con1672 Psycholinguistic norms Valence, Arousal, Age of Acquisition, Gender Ladenness LIWC metrics Achievement, Religion, Death, Sexual, Swear Table 2: Psycholinguistic Normatives and LIWC metrics used in analysis male female total # Characters 4899 2008 6907 # Dialogues 375711 154897 530608 Number of movies 945 Table 3: Character statistics role male female total Writers 1326 169 1495 Directors 544 46 590 Producers 2866 870 3736 Casting Directors 135 275 410 Distributing Companies 2701 Table 4: Production team statistics text of books, films and comics (Beveridge and Shan, 2016; Bonato et al., 2016; Alberich et al., 2002; Ribeiro et al., 2016). 5 Results We study differences in various subgroups along multiple facets. We first report results on differences in character ratios from each subgroup since this has implications on employment and can have social-economic effects (Niven, 2006). We next use psycholinguistic normatives and LIWC metrics described in the previous section to study differences in character portrayal along the primary markers: age, gender and race. We finally use the graph theoretic centrality measures to estimate characters’ importance and analyze differences among the different subgroups. Since we are interested in character level analytics, we treat all utterances from the character as a single document to compute the aggregate language metrics. We perform all our experiments using non-parametric statistical tests since the data fails to satisfy preconditions such as normality and homoscedasticity required for parametric tests such as ANOVA. 5.1 Difference in relative frequency of subgroups We first filter our characters with unknown gender/race/age leaving us with 6907 characters in tocharacter genders f (28.9%) m (71.1%) f 249 (41.2%) 356 (58.8%) m 1541 (27.6%) 4040 (72.4%) (a) writers gender f 114 (39.3%) 176 (60.7%) m 1676 (28.4%) 4220 (71.6%) (b) directors gender f 1374 (29.1%) 3350 (70.9%) m 416 (28.5%) 1046 (71.5%) (c) casting directors gender Table 5: Contingency tables for character gender v/s writers, directors and casting directors’ gender; f: female and m: male; each cell gives frequency of character gender for that column and production member gender for that row, numbers in braces indicate row wise proportion of character gender tal. Table 3 lists the number of characters and dialogues from each gender. As noted in previous studies, the ratio is considerably skewed with male actors having nearly twice as many roles and dialogues compared to female actors. Table 4 lists relative frequency among male and female members of the production team. Table 1 lists the percentage of actors belonging to different racial categories in the corpus. We perform chi-squared tests between character gender and gender of production team members who are most likely to influence characters gender: writers, directors and casting directors. Table 5 shows contingency tables with gender frequencies for each of these cases along with percentages. Note we filter out nearly 100 movies for this test in which the gender of the production team members was unknown. Of the three tests we perform, character gender distributions for writer and director genders are significantly different from the overall character gender distribution (p < 10−10 and p < 10−4 respectively; α = 0.05). 
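For reference, the writer-gender test above can be reproduced directly from the counts in Table 5(a) with an off-the-shelf chi-squared test of independence; the snippet below is illustrative and uses scipy.

```python
# Illustrative sketch: chi-squared test of independence between writer gender
# and character gender, using the counts from Table 5(a).
from scipy.stats import chi2_contingency

# Rows: writer gender (female, male); columns: character gender (female, male).
table_5a = [[249, 356],
            [1541, 4040]]

chi2, p, dof, expected = chi2_contingency(table_5a)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```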
In particular, female writers and directors appear to produce movies with relatively balanced gender proportions (still slightly skewed towards the male side) compared to male writers and directors. Casting directors, however, appear to have no influence on the gender of the characters.

[Figure 1: Histogram of age for actors belonging to different gender (female/male) and racial categories (caucasian, eastasian, mixed, nativeamerican, african, southasian, latino, pacificislander) with p-values on top; significant values at α = 0.05 are highlighted; *: no test performed since the female group is empty.]

Studies report potential biases in actor employment with age (Lincoln and Allen, 2004), particularly in female actors. To evaluate this, we plot histograms of age for male and female characters for each of the racial categories in Figure 1. The distribution of age for each category appears approximately normal, except for the nativeamerican and pacificislander character groups, which have a small sample size. For most categories of race, the mode of the distribution for female actors appears to be at least five years less than the mode for male actors. To check for significance in this difference, we conduct Mann-Whitney U tests on male and female age groups for each race, with the resulting p-values shown in the figure. We ignore characters belonging to the pacificislander racial group since there are no female actors from this race in our corpus. The difference in age groups is significant in most categories with large sample sizes, suggesting possible preferences towards casting younger people when casting female actors. 5.2 Character portrayal using language To analyze differences in portrayal of subgroups, we compute psycholinguistic normatives and LIWC metrics as described before. For each of the metrics listed in Table 2, we conduct nonparametric hypothesis tests to look for differences in samples from the subgroups. We treat the different metrics independently, performing statistical tests along each separately. We avoid statistical tests combining two or more factors since some of the resulting groups would be empty due to the skewed group sizes along race. We defer such analyses to future work.

Table 6: Median values for male (4894) and female (2008) characters along with p-values obtained by comparing the two groups using the Mann-Whitney U test; highlighted differences are significant at α = 0.05
metric        male      female    p
age of acq.   −0.1590   −0.1715   < 10−5
arousal        0.0253    0.0246   0.41
gender        −0.0312   −0.0055   < 10−5
valence        0.2284    0.2421   < 10−5
sex            0.00015   0.0000   0.08
achieve        0.0087    0.0080   < 10−5
religion       0.0025    0.0022   0.10
death          0.0025    0.0016   < 10−5
swear          0.0037    0.0015   < 10−5

5.2.1 Gender We perform Mann-Whitney U tests between male and female characters along the nine dimensions; the results are shown in Table 6. In all of the cases, higher values imply a higher degree of the corresponding dimension, except for valence, in which higher values imply positive valence (attractiveness) and lower values imply negative valence (averseness). The differences between male and female characters are statistically significant along six of the nine dimensions.
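Each row of Table 6 corresponds to a two-sample Mann-Whitney U test over character-level metric values. A minimal sketch with scipy, assuming the metric values have already been aggregated per character into a hypothetical `scores_by_gender` mapping:

```python
# Illustrative sketch: nonparametric comparison of one character-level metric
# (e.g., valence) between male and female characters, as in Table 6.
import numpy as np
from scipy.stats import mannwhitneyu

def compare_genders(scores_by_gender, alpha=0.05):
    # scores_by_gender: {"m": [...], "f": [...]} values of one metric per character.
    male, female = scores_by_gender["m"], scores_by_gender["f"]
    stat, p = mannwhitneyu(male, female, alternative="two-sided")
    return {"median_m": float(np.median(male)),
            "median_f": float(np.median(female)),
            "U": float(stat),
            "p": float(p),
            "significant": p < alpha}
```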
The results indicate slightly higher age of acquisition scores for male characters. Regarding gender ladenness, male characters appear to be closer to the masculine side than female characters on average, agreeing with previous results. Our results also indicate that female character utterances tend to be more positive in valence compared to male characters, while male characters seem to have a higher percentage of words related to achievement. In addition, male characters appear to be more frequent in using words related to death as well as swear words compared to female characters. 5.2.2 Race To study differences in portrayal of the racial categories, we perform the Kruskal-Wallis test (a generalization of the Mann-Whitney U test for more than two groups) on each of the nine metrics with race as the independent variable. We found significant differences in the distribution of samples for gender ladenness, sexuality, religion and swear words. For gender ladenness, caucasian and mixed race characters have significantly higher medians than african and nativeamerican characters. In sexuality, latino and mixed race characters were found to have a higher median than at least one other racial group with significance, indicating a higher degree of sexualization in these characters. Eastasian characters were found to be significantly lower than the medians of three other races (caucasian, african and mixed) in using words with religious connotations. In swear word usage, the only significant difference found is between caucasian and african characters, with african characters using a higher percentage of swear words. In all of the above cases, significance was tested at α = 0.05. 5.2.3 Age To examine the relationship between age and the different metrics, we build separate linear regression models with each dimension as the dependent variable and character age as the independent variable. Table 7 reports regression coefficients for age along with p-values for each dimension.

Table 7: Coefficients of age for linear regression models along each dimension along with p-values; highlighted cells are significant at α = 0.05
metric        β1 (×10−3)   p-value
age of acq.    3.9         < 10−10
arousal       −1.1         < 10−10
gender        −2.5         < 10−10
valence        0.078       0.7
sex           −0.25        < 10−5
achieve        0.26        < 10−10
religion       0.12        0.001
death         −0.039       0.2
swear         −0.34        < 10−5

The positive coefficient for age of acquisition indicates an increase in sophistication of word usage with age. Arousal, on the other hand, has a significant negative coefficient, indicating a decrease in activation, on average, as character age increases. Gender ladenness also has a significant negative coefficient, indicating that as age increases, the average gender ladenness value decreases. Similar trends are observed for sexuality and swear word usage. Usage of words related to achievement and religion, however, seems to increase with age. 5.3 Character network analytics To study differences in major roles assigned to the different subgroups, we compute two centrality metrics from the character network graph constructed for each movie: degree centrality measures the number of unique characters that interact with a given character, and betweenness centrality measures how much the plot would be disrupted if said character were to disappear completely, i.e., how important a character is to the overall plot. Similar to the language analyses from the previous section, we test differences in these metrics along the three factors of gender, race and age. All statistical tests reported below are conducted at α = 0.05.
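The subgroup tests of Sections 5.2.2 and 5.2.3 can be sketched in a few lines with scipy; the data structures and function names below are assumptions, not the authors' code.

```python
# Illustrative sketch: Kruskal-Wallis test across racial groups for one metric,
# and per-metric linear regression of the metric on character age (Table 7).
from scipy.stats import kruskal, linregress

def race_difference(values_by_race):
    # values_by_race: dict mapping a racial category to that group's metric values.
    stat, p = kruskal(*values_by_race.values())
    return {"H": float(stat), "p": float(p)}

def age_coefficients(ages, metrics_by_name):
    # ages: list of character ages; metrics_by_name: dict name -> list of values.
    results = {}
    for name, values in metrics_by_name.items():
        fit = linregress(ages, values)  # slope, intercept, rvalue, pvalue, stderr
        results[name] = {"beta1": fit.slope, "p": fit.pvalue}
    return results
```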
5.3.1 Gender Male characters were found to have higher values in the two metrics compared to female characters but the differences were not statistically significant. Motivated by studies (Sapolsky et al., 2003; Linz et al., 1984) which report interactions between genre and gender, we performed MannWhitney U tests between male and female char1675 acters given different genres. To avoid type I errors we corrected for multiple comparisons using the Holm-Bonferroni correction. Significant differences were found only in horror movies where the median degree centrality for females (0.221) was higher than the median degree centrality of males (0.166). This is in agreement with prior studies which report female characters to have a more prominent presence in horror movies, particularly as victims of violent scenes (Welsh and Brantford, 2009). 5.3.2 Race To examine differences in major roles across the racial categories, we perform Kruskal-Wallis tests similar to previous subsection. Significant differences were found with both degree and betweenness centrality measures (p < 0.001; α = 0.05). Latino characters were found to have significantly lower degree centralities compared to caucasian and southasian races suggesting noncentral roles in these characters. Caucasian characters were found to have median betweenness centralities significantly higher than at least one other race. Characters from the nativeamerican race exhibit significantly lower medians in both degree and betweenness centralities than caucasian, african and mixed characters, which agrees with (Rosenthal, 2012). 5.3.3 Age We investigate the effects of age on importance of character roles by building a linear regression model on the two centralities with age as the independent variable. In both cases, age was found to be significant (p < 0.001; α = 0.05). With degree centrality, the regression coefficient β was found to be equal to 0.003. In betweenness centrality, the regression coefficient was also positive, given by β = 8.41×10−4. Both these metrics indicate a positive correlation for character importance with age, i.e. as characters age, there is an increased interaction with other characters in the movie as well as higher prominence in the movie plot. 6 Conclusion We present a scalable automated analyses of differences in character portrayal along multiple factors such as gender, race and age using word usage, psycholinguistic and graph theoretic measures. Several interesting patterns are revealed in the analysis. In particular, movies with female writers and directors in the production team are observed to have balanced gender ratios in characters compared to male writers/directors. Across several races, female actors are found to be younger than male actors on average. Female characters appear to be more positive in language use with fewer references to death and fewer swear words compared to male characters. Female characters also appear to be more prominent in horror movies compared to male characters. Latino and mixed race characters appear to have higher usage of sexual words. Eastasian characters seem to use significantly fewer religious words. As characters aged, their word sophistication seems to increase along with usage of words related to achievement and religion; there was also a significant reduction in word activation, usage of sexual and swear words as character age increases. Future work includes expanding the analyses to non-English movies and combining the linguistic metrics with character networks. 
Specifically, character network edges can be weighted using the psycholinguistic metrics to analyze the emotional patterns in inter-character interactions. 7 Acknowledgments We acknowledge support from NSF and our partnership with Google and the Geena Davis Institute on Gender in Media. We thank Naveen Kumar for all the helpful discussions and feedback during this work. References Ricardo Alberich, Joe Miro-Julia, and Francesc Rossell´o. 2002. Marvel universe looks almost like a real social network. arXiv preprint condmat/0202174 . Elizabeth Behm-Morawitz and Dana E Mastro. 2008. Mean girls? the influence of gender portrayals in teen movies on emerging adults’ gender-based attitudes and beliefs. Journalism & Mass Communication Quarterly 85(1):131–146. Harry M Benshoff and Sean Griffin. 2011. America on film: Representing race, class, gender, and sexuality at the movies. John Wiley & Sons. Andrew Beveridge and Jie Shan. 2016. Network of thrones. Math Horizons 23(4):18–22. Anthony Bonato, David Ryan D’Angelo, Ethan R Elenberg, David F Gleich, and Yangyang Hou. 2016. 1676 Mining and modeling character networks. In Algorithms and Models for the Web Graph: 13th International Workshop, WAW 2016, Montreal, QC, Canada, December 14–15, 2016, Proceedings 13. Springer, pages 100–114. Gavin S Cape. 2003. Addiction, stigma and movies. Acta Psychiatrica Scandinavica 107(3):163–169. James M Clark and Allan Paivio. 2004. Extensions of the paivio, yuille, and madigan (1968) norms. Behavior Research Methods, Instruments, & Computers 36(3):371–383. William Cohen, Pradeep Ravikumar, and Stephen Fienberg. 2003. A comparison of string metrics for matching names and records. In Kdd workshop on data cleaning and object consolidation. volume 3, pages 73–78. DailyScript. 2017. The daily script. [Online; accessed 1-February-2017]. http://dailyscript.com/. Paul G Davies, Steven J Spencer, and Claude M Steele. 2005. Clearing the air: identity safety moderates the effects of stereotype threat on women’s leadership aspirations. Journal of personality and social psychology 88(2):276. Tony Dimnik and Sandra Felton. 2006. Accountant stereotypes in movies distributed in north america in the twentieth century. Accounting, Organizations and Society 31(2):129–155. Alice H Eagly and Steven J Karau. 2002. Role congruity theory of prejudice toward female leaders. Psychological review 109(3):573. ethnicelebs.com. 2017. Celebrity ethnicity. [Online; accessed 1-February-2017]. http://ethnicelebs.com. Louis August Gottschalk and Goldine C Gleser. 1969. The measurement of psychological states through the content analysis of verbal behavior. Univ of California Press. Mark Hedley. 1994. The presentation of gendered conflict in popular movies: Affective stereotypes, cultural sentiments, and men’s motivation. Sex Roles 31(11-12):721–740. Bell Hooks. 2009. Reel to real: race, class and sex at the movies. Routledge. IMDb. 2017. Internet movie database. [Online; accessed 1-February-2017]. http://www.imdb.com/. IMSDb. 2017. Internet movie script database. [Online; accessed 1-February-2017]. http://www.imsdb.com/. Anne E Lincoln and Michael Patrick Allen. 2004. Double jeopardy in hollywood: Age and gender in the careers of film actors, 1926–1999. In Sociological Forum. Springer, volume 19, pages 611–631. Daniel Linz, Edward Donnerstein, and Steven Penrod. 1984. The effects of multiple exposures to filmed violence against women. Journal of Communication 34(3):130–147. 
Ting Liu, Kit Cho, George Aaron Broadwell, Samira Shaikh, Tomek Strzalkowski, John Lien, Sarah M Taylor, Laurie Feldman, Boris Yamrom, Nick Webb, et al. 2014. Automatic expansion of the mrc psycholinguistic database imageability ratings. In LREC. pages 2800–2805. Franc¸ois Mairesse, Marilyn A Walker, Matthias R Mehl, and Roger K Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of artificial intelligence research 30:457–500. Nikolaos Malandrakis and Shrikanth S Narayanan. 2015. Therapy language analysis using automatically generated psycholinguistic norms. In INTERSPEECH. pages 1952–1956. Finn ˚Arup Nielsen. 2011. A new anew: Evaluation of a word list for sentiment analysis in microblogs. arXiv preprint arXiv:1103.2903 . David Niven. 2006. Throwing your hat out of the ring: Negative recruitment and the gender imbalance in state legislative candidacy. Politics & Gender 2(04):473–489. NYFA. 2013. Gender inequality in film. [Online; accessed 1-February-2017]. https://www.nyfa.edu/film-school-blog/genderinequality-in-film/. James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of liwc2015. Technical report. James W Pennebaker, Matthias R Mehl, and Kate G Niederhoffer. 2003. Psychological aspects of natural language use: Our words, our selves. Annual review of psychology 54(1):547–577. Polygraph. 2016. Film dialogue from 2,000 screenplays, broken down by gender and age. [Online; accessed 1-February-2017]. http://polygraph.cool/films/. Anil Ramakrishna, Nikolaos Malandrakis, Elizabeth Staruk, and Shrikanth S Narayanan. 2015. A quantitative analysis of gender differences in movies using psycholinguistic normatives. In EMNLP. pages 1996–2001. Mauricio Aparecido Ribeiro, Roberto Antonio Vosgerau, Maria Larissa Pereira Andruchiw, and Sandro Ely de Souza Pinto. 2016. The complex social network of the lord of rings. Revista Brasileira de Ensino de F´ısica 38(1). Nicolas G Rosenthal. 2012. Reimagining Indian country: native American migration and identity in twentieth-century Los Angeles. Univ of North Carolina Press. 1677 Burry S Sapolsky, Fred Molitor, and Sarah Luque. 2003. Sex and violence in slasher films: Reexamining the assumptions. Journalism & Mass Communication Quarterly 80(1):28–38. Stacy L Smith, Marc Choueiti, and Katherine Pieper. 2014. Gender bias without borders: An investigation of female characters in popular films across 11 countries. USC Annenberg 5. Shinya Tanaka, Adam Jatowt, Makoto P Kato, and Katsumi Tanaka. 2013. Estimating content concreteness for finding comprehensible documents. In Proceedings of the sixth ACM international conference on Web search and data mining. ACM, pages 475– 484. Tom FM Ter Bogt, Rutger CME Engels, Sanne Bogers, and Monique Kloosterman. 2010. “shake it baby, shake it”: Media preferences, sexual attitudes and gender stereotypes among adolescents. Sex Roles 63(11-12):844–859. Danny Wedding and Mary Ann Boyd. 1999. Movies & mental illness: Using films to understand psychopathology. . Andrew Welsh and Laurier Brantford. 2009. Sex and violence in the slasher horror film: A content analysis of gender differences in the depiction of violence. Journal of Criminal Justice and Popular Culture 16(1):1–25. Bo Xiao, Zac E Imel, Panayiotis G Georgiou, David C Atkins, and Shrikanth S Narayanan. 2015. ”rate my therapist”: Automated detection of empathy in drug and alcohol counseling via speech and language processing. PloS one 10(12):e0143055. 
2017
153
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1679–1689 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1154 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1679–1689 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1154 Linguistically Regularized LSTM for Sentiment Classification Qiao Qian1, Minlie Huang1∗, Jinhao Lei2, Xiaoyan Zhu1 1State Key Laboratory of Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Dept. of Computer Science and Technology, Tsinghua University, Beijing 100084, PR China 2Dept. of Thermal Engineering, Tsinghua University, Beijing 100084, PR China [email protected], [email protected] [email protected] , [email protected] Abstract This paper deals with sentence-level sentiment classification. Though a variety of neural network models have been proposed recently, however, previous models either depend on expensive phrase-level annotation, most of which has remarkably degraded performance when trained with only sentence-level annotation; or do not fully employ linguistic resources (e.g., sentiment lexicons, negation words, intensity words). In this paper, we propose simple models trained with sentence-level annotation, but also attempt to model the linguistic role of sentiment lexicons, negation words, and intensity words. Results show that our models are able to capture the linguistic role of sentiment words, negation words, and intensity words in sentiment expression. 1 Introduction Sentiment classification aims to classify text to sentiment classes such as positive or negative, or more fine-grained classes such as very positive, positive, neutral, etc. There has been a variety of approaches for this purpose such as lexicon-based classification (Turney, 2002; Taboada et al., 2011), and early machine learning based methods (Pang et al., 2002; Pang and Lee, 2005), and recently neural network models such as convolutional neural network (CNN) (Kim, 2014; Kalchbrenner et al., 2014; Lei et al., 2015), recursive autoencoders (Socher et al., 2011, 2013), Long ShortTerm Memory (LSTM) (Mikolov, 2012; Chung et al., 2014; Tai et al., 2015; Zhu et al., 2015), and many more. ∗Corresponding Author: Minlie Huang In spite of the great success of these neural models, there are some defects in previous studies. First, tree-structured models such as recursive autoencoders and Tree-LSTM (Tai et al., 2015; Zhu et al., 2015), depend on parsing tree structures and expensive phrase-level annotation, whose performance drops substantially when only trained with sentence-level annotation. Second, linguistic knowledge such as sentiment lexicon, negation words or negators (e.g., not, never), and intensity words or intensifiers (e.g., very, absolutely), has not been fully employed in neural models. The goal of this research is to developing simple sequence models but also attempts to fully employing linguistic resources to benefit sentiment classification. Firstly, we attempts to develop simple models that do not depend on parsing trees and do not require phrase-level annotation which is too expensive in real-world applications. Secondly, in order to obtain competitive performance, simple models can benefit from linguistic resources. 
Three types of resources will be addressed in this paper: sentiment lexicon, negation words, and intensity words. Sentiment lexicon offers the prior polarity of a word which can be useful in determining the sentiment polarity of longer texts such as phrases and sentences. Negators are typical sentiment shifters (Zhu et al., 2014), which constantly change the polarity of sentiment expression. Intensifiers change the valence degree of the modified text, which is important for fine-grained sentiment classification. In order to model the linguistic role of sentiment, negation, and intensity words, our central idea is to regularize the difference between the predicted sentiment distribution of the current position 1, and that of the previous or next positions, in a sequence model. For instance, if the cur1Note that in sequence models, the hidden state of the current position also encodes forward or backward contexts. 1679 rent position is a negator not, the negator should change the sentiment distribution of the next position accordingly. To summarize, our contributions lie in two folds: • We discover that modeling the linguistic role of sentiment, negation, and intensity words can enhance sentence-level sentiment classification. We address the issue by imposing linguistic-inspired regularizers on sequence LSTM models. • Unlike previous models that depend on parsing structures and expensive phrase-level annotation, our models are simple and efficient, but the performance is on a par with the stateof-the-art. The rest of the paper is organized as follows: In the following section, we survey related work. In Section 3, we briefly introduce the background of LSTM and bidirectional LSTM, and then describe in detail the lingistic regularizers for sentiment/negation/intensity words in Section 4. Experiments are presented in Section 5, and Conclusion follows in Section 6. 2 Related Work 2.1 Neural Networks for Sentiment Classification There are many neural networks proposed for sentiment classification. The most noticeable models may be the recursive autoencoder neural network which builds the representation of a sentence from subphrases recursively (Socher et al., 2011, 2013; Dong et al., 2014; Qian et al., 2015). Such recursive models usually depend on a tree structure of input text, and in order to obtain competitive results, usually require annotation of all subphrases. Sequence models, for instance, convolutional neural network (CNN), do not require tree-structured data, which are widely adopted for sentiment classification (Kim, 2014; Kalchbrenner et al., 2014; Lei et al., 2015). Long short-term memory models are also common for learning sentence-level representation due to its capability of modeling the prefix or suffix context (Hochreiter and Schmidhuber, 1997). LSTM can be commonly applied to sequential data but also tree-structured data (Zhu et al., 2015; Tai et al., 2015). 2.2 Applying Linguistic Knowledge for Sentiment Classification Linguistic knowledge and sentiment resources, such as sentiment lexicons, negation words (not, never, neither, etc.) or negators, and intensity words (very, extremely, etc.) or intensifiers, are useful for sentiment analysis in general. Sentiment lexicon (Hu and Liu, 2004; Wilson et al., 2005) usually defines prior polarity of a lexical entry, and is valuable for lexicon-based models (Turney, 2002; Taboada et al., 2011), and machine learning approaches (Pang and Lee, 2008). 
There are recent works for automatic construction of sentiment lexicons from social data (Vo and Zhang, 2016) and for multiple languages (Chen and Skiena, 2014). A noticeable work that ultilizes sentiment lexicons can be seen in (Teng et al., 2016) which treats the sentiment score of a sentence as a weighted sum of prior sentiment scores of negation words and sentiment words, where the weights are learned by a neural network. Negation words play a critical role in modifying sentiment of textual expressions. Some early negation models adopt the reversing assumption that a negator reverses the sign of the sentiment value of the modified text (Polanyi and Zaenen, 2006; Kennedy and Inkpen, 2006). The shifting hyothesis assumes that negators change the sentiment values by a constant amount (Taboada et al., 2011; Liu and Seneff, 2009). Since each negator can affect the modified text in different ways, the constant amount can be extended to be negatorspecific (Zhu et al., 2014), and further, the effect of negators could also depend on the syntax and semantics of the modified text (Zhu et al., 2014). Other approaches to negation modeling can be seen in (Jia et al., 2009; Wiegand et al., 2010; Benamara et al., 2012; Lapponi et al., 2012). Sentiment intensity of a phrase indicates the strength of associated sentiment, which is quite important for fine-grained sentiment classification or rating. Intensity words can change the valence degree (i.e., sentiment intensity) of the modified text. In (Wei et al., 2011) the authors propose a linear regression model to predict the valence value for content words. In (Malandrakis et al., 2013), a kernel-based model is proposed to combine semantic information for predicting sentiment score. In the SemEval-2016 task 7 subtask A, a learningto-rank model with a pair-wise strategy is proposed to predict sentiment intensity scores (Wang 1680 et al., 2016). Linguistic intensity is not limited to sentiment or intensity words, and there are works that assign low/medium/high intensity scales to adjectives such as okay, good, great (Sharma et al., 2015) or to gradable terms (e.g. large, huge, gigantic) (Shivade et al., 2015). In (Dong et al., 2015), a sentiment parser is proposed, and the authors studied how sentiment changes when a phrase is modified by negators or intensifiers. Applying linguistic regularization to text classification can be seen in (Yogatama and Smith, 2014) which introduces three linguistically motivated structured regularizers based on parse trees, topics, and hierarchical word clusters for text categorization. Our work differs in that (Yogatama and Smith, 2014) applies group lasso regularizers to logistic regression on model parameters while our regularizers are applied on intermediate outputs with KL divergence. 3 Long Short-term Memory Network 3.1 Long Short-Term Memory (LSTM) Long Short-Term Memory has been widely adopted for text processing. Briefly speaking, in LSTM, the hidden states ht and memory cell ct is a function of their previous ct−1 and ht−1 and input vector xt, or formally as follows: ct, ht = g(LSTM)(ct−1, ht−1, xt) (1) The hidden state ht ∈Rd denotes the representation of position t while also encoding the preceding contexts of the position. For more details about LSTM, we refer readers to (Hochreiter and Schmidhuber, 1997). 3.2 Bidirectional LSTM In LSTM, the hidden state of each position (ht) only encodes the prefix context in a forward direction while the backward context is not considered. 
Bidirectional LSTM (Graves et al., 2013) exploited two parallel passes (forward and backward) and concatenated hidden states of the two LSTMs as the representation of each position. The forward and backward LSTMs are respectively formulated as follows: −→c t, −→h t = g(LSTM)(−→c t−1, −→h t−1, xt) (2) ←−c t, ←−h t = g(LSTM)(←−c t+1, ←−h t+1, xt) (3) where g(LSTM) is the same as that in Eq (1). Particularly, parameters in the two LSTMs are shared. The representation of the entire sentence is [−→h n, ←−h 1], where n is the length of the sentence. At each position t, the new representation is ht = [−→h t, ←−h t], which is the concatenation of hidden states of the forward LSTM and backward LSTM. In this way, the forward and backward contexts can be considered simultaneously. 4 Linguistically Regularized LSTM Figure 1: The overview of Linguistically Regularized LSTM. Note that we apply a backward LSTM (from right to left) to encode sentence since most negators and intensifiers are modifying their following words. The central idea of the paper is to model the linguistic role of sentiment, negation, and intensity words in sentence-level sentiment classification by regularizing the outputs at adjacent positions of a sentence. For example in Fig 1, in sentence “It’s not an interesting movie”, the predicted sentiment distributions at “*an interesting movie2” and “*interesting movie” should be close to each other, while the predicted sentiment distribution at “*interesting movie” should be quite different from the preceding positions (in the backward direction) (“*movie”) since a sentiment word (“interesting”) is seen. We propose a generic regularizer and three special regularizers based on the following linguistic observations: • Non-Sentiment Regularizer: if the two adjacent positions are all non-opinion words, the sentiment distributions of the two positions should be close to each other. Though 2The asterisk denotes the current position. 1681 this is not always true (e.g., soap movie), this assumption holds at most cases. • Sentiment Regularizer: if the word is a sentiment word found in a lexicon, the sentiment distribution of the current position should be significantly different from that of the next or previous positions. We approach this phenomenon with a sentiment class specific shifting distribution. • Negation Regularizer: Negation words such as “not” and “never” are critical sentiment shifter or converter: in general they shift sentiment polarity from the positive end to the negative end, but sometimes depend on the negation word and the words they modify. The negation regularizer models this linguistic phenomena with a negator-specific transformation matrix. • Intensity Regularizer: Intensity words such as “very” and “extremely” change the valence degree of a sentiment expression: for instance, from positive to very positive. Modeling this effect is quite important for finegrained sentiment classification, and the intensity regularizer is designed to formulate this effect by a word-specific transformation matrix. More formally, the predicted sentiment distribution (pt, based on ht, see Eq. 5) at position t should be linguistically regularized with respect to that of the preceding (t −1) or following (t + 1) positions. 
In order to enforce the model to produce coherent predictions, we plug a new loss term into the original cross entropy loss: L(θ) = − ∑ i ˆyi log yi + α ∑ i ∑ t Lt,i + β||θ||2 (4) where ˆyi is the gold distribution for sentence i, yi is the predicted distribution, Lt,i is one of the above regularizers or combination of these regularizers on sentence i, α is the weight for the regularization term, and t is the word position in a sentence. Note that we do not consider the modification span of negation and intensity words to preserve the simplicity of the proposed models. Negation scope resolution is another complex problem which has been extensively studied (Zou et al., 2013; Packard et al., 2014; Fancellu et al., 2016), which is beyond the scope of this work. Instead, we resort to sequence LSTMs for encoding surrounding contexts at a given position. 4.1 Non-Sentiment Regularizer (NSR) This regularizer constrains that the sentiment distributions of adjacent positions should not vary much if the additional input word xt is not a sentiment word, formally as follows: L(NSR) t = max(0, DKL(pt||pt−1) −M) (5) where M is a hyperparameter for margin, pt is the predicted distribution at state of position t, (i.e., ht), and DKL(p||q) is a symmetric KL divergence defined as follows: DKL(p||q) = 1 2 C ∑ l=1 p(l) log q(l) + q(l) log p(l) (6) where p, q are distributions over sentiment labels l and C is the number of labels. 4.2 Sentiment Regularizer (SR) The sentiment regularizer constrains that the sentiment distributions of adjacent positions should drift accordingly if the input word is a sentiment word. Let’s revisit the example “It’s not an interesting movie” again. At position t = 2 (in the backward direction) we see a positive word “interesting” so the predicted distribution would be more positive than that at position t = 1 (movie). This is the issue of sentiment drift. In order to address the sentiment drift issue, we propose a polarity shifting distribution sc ∈RC for each sentiment class defined in a lexicon. For instance, a sentiment lexicon may have class labels like strong positive, weakly positive, weakly negative, and strong negative, and for each class, there is a shifting distribution which will be learned by the model. The sentiment regularizer states that if the current word is a sentiment word, the sentiment distribution drift should be observed in comparison to the previous position, in more details: p(SR) t−1 = pt−1 + sc(xt) (7) L(SR) t = max(0, DKL(pt||p(SR) t−1 ) −M) (8) where p(SR) t−1 is the drifted sentiment distribution after considering the shifting sentiment distribution corresponding to the state at position t, c(xt) 1682 is the prior sentiment class of word xt, and sc ∈θ is a parameter to be optimized but could also be set fixed with prior knowledge. Note that in this way all words of the same sentiment class share the same drifting distribution, but in a refined setting, we can learn a shifting distribution for each sentiment word if large-scale datasets are available. 4.3 Negation Regularizer (NR) The negation regularizer approaches how negation words shift the sentiment distribution of the modified text. When the input xt is a negation word, the sentiment distribution should be shifted/reversed accordingly. However, the negation role is more complex than that by sentiment words, for example, the word “not” in “not good” and “not bad” have different roles in polarity change. 
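A minimal NumPy sketch of the non-sentiment and sentiment regularizers follows. Two assumptions are worth flagging: the sketch uses the standard symmetric KL divergence, ½[D_KL(p‖q) + D_KL(q‖p)], whereas Eq. (6) as printed reads slightly differently; and it clips and renormalizes the drifted vector of Eq. (7) so that it remains a valid distribution, a detail the paper does not specify. The margin M and the shifting vector are toy values (in the model, M is a hyperparameter and s_c is learned).

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence: 0.5 * (KL(p||q) + KL(q||p))."""
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def nsr_loss(p_t, p_prev, M=0.1):
    """Non-Sentiment Regularizer, Eq. (5): hinge on the divergence between adjacent positions."""
    return max(0.0, sym_kl(p_t, p_prev) - M)

def sr_loss(p_t, p_prev, s_c, M=0.1):
    """Sentiment Regularizer, Eqs. (7)-(8): drift the previous distribution by the
    class-specific shifting vector s_c, then apply the same hinge.
    Clipping + renormalization keeps the drifted vector a valid distribution
    (an implementation assumption; the paper does not specify this step)."""
    drifted = np.clip(p_prev + s_c, 1e-6, None)
    drifted = drifted / drifted.sum()
    return max(0.0, sym_kl(p_t, drifted) - M)

# toy usage with C = 5 sentiment classes
p_prev = np.array([0.10, 0.15, 0.50, 0.15, 0.10])     # distribution at position t-1
p_t    = np.array([0.05, 0.10, 0.20, 0.40, 0.25])     # distribution at position t
s_pos  = np.array([-0.05, -0.10, -0.15, 0.15, 0.15])  # shift for a positive-class word
print(nsr_loss(p_t, p_prev), sr_loss(p_t, p_prev, s_pos))
```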
The former changes the polarity to negative, while the latter changes to neutral instead of positive. To respect such complex negation effects, we propose a transformation matrix Tm ∈RC×C for each negation word m, and the matrix will be learned by the model. The regularizer assumes that if the current position is a negation word, the sentiment distribution of the current position should be close to that of the next or previous position with the transformation. p(NR) t−1 = softmax(Txj × pt−1) (9) p(NR) t+1 = softmax(Txj × pt+1) (10) L(NR) t = min { max(0, DKL(pt||p(NR) t−1 ) −M) max(0, DKL(pt||p(NR) t+1 ) −M) (11) where p(NR) t−1 and p(NR) t+1 is the sentiment distuibution after transformation, Txj ∈θ is the transformation matrix for a negation word xj, a parameter to be learned during training. In total, we train m transformation matrixs for m negation words. Such negator-specific transformation is in accordance with the finding that each negator has its individual negation effect (Zhu et al., 2014). 4.4 Intensity Regularizer (IR) Sentiment intensity of a phrase indicates the strength of associated sentiment, which is quite important for fine-grained sentiment classification or rating. Intensifier can change the valence degree of the content word. The intensity regularizer models how intensity words influence the sentiment valence of a phrase or a sentence. The formulation of the intensity effect is quite the same as that in the negation regularizer, but with different parameters of course. For each intensity word, there is a transform matrix to favor the different roles of various intensifiers on sentiment drift. For brevity, we will not repeat the formulas here. 4.5 Applying Linguistic Regularizers to Bidirectional LSTM To preserve the simplicity of our proposals, we do not consider the modification span of negation and intensity words, which is a quite challenging problem in the NLP community (Zou et al., 2013; Packard et al., 2014; Fancellu et al., 2016). However, we can alleviate the problem by leveraging bidirectional LSTM. For a single LSTM, we employ a backward LSTM from the end to the beginning of a sentence. This is because, at most times, the modified words of negation and intensity words are usually at the right side of the modified text. But sometimes, the modified words are at the left side of negation and intensity words. To better address this issue, we employ bidirectional LSTM and let the model determine which side should be chosen. More formally, in Bi-LSTM, we compute a transformed sentiment distribution on −→p t−1 of the forward LSTM and also that on ←−p t+1 of the backward LSTM, and compute the minimum distance of the distribution of the current position to the two distributions. This could be formulated as follows: −→p (R) t−1 = softmax(Txj × −→p t−1) (12) ←−p (R) t+1 = softmax(Txj × ←−p t+1) (13) L(R) t = min { max(0, DKL(−→p t||−→p (R) t−1) −M) max(0, DKL(←−p t||←−p (R) t+1) −M) (14) where −→p (R) t−1 and ←−p (R) t+1 are the sentiment distributions transformed from the previous distribution −→p t−1 and next distribution ←−p t+1 respectively. Note that R ∈{NR, IR} indicating the formulation works for both negation and intensity regularizers. 1683 Due to the same consideration, we redefine L(NSR) t and L(SR) t with bidirectional LSTM similarly. The formulation is the same and omitted for brevity. 4.6 Discussion Our models address these linguistic factors with mathematical operations, parameterized with shifting distribution vectors or transformation matrices. 
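As a concrete illustration of the transformation-matrix parameterization, the sketch below implements the negation regularizer of Eqs. (9)–(11) together with the bidirectional minimum of Eq. (14); the intensity regularizer is identical up to its own matrices. The matrix T is initialized here with a toy polarity-flipping pattern purely for illustration — in the model the matrices are learned per negator.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def negation_loss(p_t, p_prev, p_next, T, M=0.1):
    """Eqs. (9)-(11): transform the previous / next distribution with the
    negator-specific matrix T, then keep the minimum of the two hinge terms,
    letting the model pick the side that the negator actually modifies."""
    p_prev_tr = softmax(T @ p_prev)      # Eq. (9)
    p_next_tr = softmax(T @ p_next)      # Eq. (10)
    left = max(0.0, sym_kl(p_t, p_prev_tr) - M)
    right = max(0.0, sym_kl(p_t, p_next_tr) - M)
    return min(left, right)              # Eq. (11) / Eq. (14)

# toy usage with C = 5 classes and a negator that roughly flips the polarity axis
C = 5
T = np.flipud(np.eye(C))                 # illustrative initialization; learned in the model
p_prev = np.array([0.05, 0.10, 0.20, 0.40, 0.25])
p_next = np.array([0.20, 0.20, 0.20, 0.20, 0.20])
p_t    = np.array([0.30, 0.35, 0.20, 0.10, 0.05])
print(negation_loss(p_t, p_prev, p_next, T))
```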
In the sentiment regularizer, the sentiment shifting effect is parameterized with a classspecific distribution (but could also be wordspecific if with more data). In the negation and intensity regularizers, the effect is parameterized with word-specific transformation matrices. This is to respect the fact that the mechanism of how negation and intensity words shift sentiment expression is quite complex and highly dependent on individual words. Negation/Intensity effect also depends on the syntax and semantics of the modified text, however, for simplicity we resort to sequence LSTM for encoding surrounding contexts in this paper. We partially address the modification scope issue by applying the minimization operator in Eq. 11 and Eq. 14, and the bidirectional LSTM. 5 Experiment 5.1 Dataset and Sentiment Lexicon Two datasets are used for evaluating the proposed models: Movie Review (MR) (Pang and Lee, 2005) where each sentence is annotated with two classes as negative, positive and Stanford Sentiment Treebank (SST) (Socher et al., 2013) with five classes { very negative, negative, neutral, positive, very positive}. Note that SST has provided phrase-level annotation on all inner nodes, but we only use the sentence-level annotation since one of our goals is to avoid expensive phrase-level annotation. The sentiment lexicon contains two parts. The first part comes from MPQA (Wilson et al., 2005), which contains 5, 153 sentiment words, each with polarity rating. The second part consists of the leaf nodes of the SST dataset (i.e., all sentiment words) and there are 6, 886 polar words except neural ones. We combine the two parts and ignore those words that have conflicting sentiment labels, and produce a lexicon of 9, 750 words with 4 sentiment labels. For negation and intensity words, we collect them manually since the number is small, some of which can be seen in Table 2. Dataset MR SST # sentences in total 10,662 11,885 #sen containing sentiment word 10,446 11,211 #sen containing negation word 1,644 1,832 #sen containing intensity word 2,687 2,472 Table 1: The data statistics. 5.2 The Details of Experiment Setting In order to let others reproduce our results, we present all the details of our models. We adopt Glove vectors (Pennington et al., 2014) as the initial setting of word embeddings V . The shifting vector for each sentiment class (sc), and the transformation matrices for negation and intensity (Tm) are initialized with a prior value. The other parameters for hidden layers (W (∗), U (∗), S) are initialized with Uniform(0, 1/sqrt(d)), where d is the dimension of hidden representation, and we set d=300. We adopt adaGrad to train the models, and the learning rate is 0.1. It’s worth noting that, we adopt stochastic gradient descent to update the word embeddings (V ), with a learning rate of 0.2 but without momentum. The optimal setting for α and β is 0.5 and 0.0001 respectively. During training, we adopt the dropout operation before the softmax layer, with a probability of 0.5. Mini-batch is taken to train the models, each batch containing 25 samples. After training with 3,000 mini-batch (about 9 epochs on MR and 10 epochs on SST), we choose the results of the model that performs best on the validation dataset as the final performance. Negation word no, nothing, never, neither, not, seldom, scarcely, etc. Intensity word terribly, greatly, absolutely, too, very, completely, etc. Table 2: Examples of negation and intensity words. 
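The paper does not name a framework, so the optimization setup of Section 5.2 is sketched below in PyTorch style. The model skeleton, class names and vocabulary size are illustrative (a single forward LSTM stands in for the backward or bidirectional encoder); only the hyperparameter values — learning rates, dropout, weight decay and batch size — follow the text.

```python
import torch
import torch.nn as nn

class SentClassifier(nn.Module):
    """Illustrative skeleton: per-position hidden states -> dropout -> class logits."""
    def __init__(self, vocab_size, emb_dim=300, hidden=300, num_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)      # initialized from GloVe in the paper
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.dropout = nn.Dropout(p=0.5)                  # dropout before the softmax layer
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, token_ids):
        states, _ = self.lstm(self.emb(token_ids))        # per-position hidden states h_t
        return self.out(self.dropout(states))             # per-position logits -> p_t via softmax

model = SentClassifier(vocab_size=20000)
emb_params = list(model.emb.parameters())
other_params = [p for n, p in model.named_parameters() if not n.startswith("emb")]

# AdaGrad (lr = 0.1) with L2 weight decay beta = 1e-4 for model parameters;
# plain SGD (lr = 0.2, no momentum) for the word embeddings, as reported.
opt_model = torch.optim.Adagrad(other_params, lr=0.1, weight_decay=1e-4)
opt_emb = torch.optim.SGD(emb_params, lr=0.2)

# one toy minibatch of 25 sentences of length 12 (training would add the
# cross-entropy loss plus alpha = 0.5 times the regularizer terms, then step both optimizers)
logits = model(torch.randint(0, 20000, (25, 12)))
print(logits.shape)   # torch.Size([25, 12, 5])
```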
5.3 Overall Comparison We include several baselines, as listed below: RNN/RNTN: Recursive Neural Network over parsing trees, proposed by (Socher et al., 2011) and Recursive Tensor Neural Network (Socher et al., 2013) employs tensors to model correlations between different dimensions of child nodes’ vectors. LSTM/Bi-LSTM: Long Short-Term Memory 1684 (Cho et al., 2014) and the bidirectional variant as introduced previously. Tree-LSTM: Tree-Structured Long Short-Term Memory (Tai et al., 2015) introduces memory cells and gates into tree-structured neural network. CNN: Convolutional Neural Network (Kalchbrenner et al., 2014) generates sentence representation by convolution and pooling operations. CNN-Tensor: In (Lei et al., 2015), the convolution operation is replaced by tensor product and a dynamic programming is applied to enumerate all skippable trigrams in a sentence. Very strong results are reported. DAN: Deep Average Network (DAN) (Iyyer et al., 2015) averages all word vectors in a sentence and connects an MLP layer to the output layer. Neural Context-Sensitive Lexicon: NCSL (Teng et al., 2016) treats the sentiment score of a sentence as a weighted sum of prior scores of words in the sentence where the weights are learned by a neural network. Method MR SST Phrase-level SST Sent.-level RNN 77.7* 44.8# 43.2* RNTN 75.9# 45.7* 43.4# LSTM 77.4# 46.4* 45.6# Bi-LSTM 79.3# 49.1* 46.5# Tree-LSTM 80.7# 51.0* 48.1# CNN 81.5* 48.0* 46.9# CNN-Tensor 51.2* 50.6* DAN 47.7* NCSL 82.9 51.1* 47.1# LR-Bi-LSTM 82.1 50.6 48.6 LR-LSTM 81.5 50.2 48.2 Table 3: The accuracy on MR and SST. Phraselevel means the models use phrase-level annotation for training. And Sent.-level means the models only use sentence-level annotation. Results marked with * are re-printed from the references, while those with # are obtained either by our own implementation or with the same codes shared by the original authors. Firstly, we evaluate our model on the MR dataset and the results are shown in Table 3. We have the following observations: First, both LR-LSTM and LR-Bi-LSTM outperforms their counterparts (81.5% vs. 77.4% and 82.1% vs. 79.3%, resp.), demonstrating the effectiveness of the linguistic regularizers. Second, LR-LSTM and LR-Bi-LSTM perform slightly better than Tree-LSTM but Tree-LSTM leverages a constituency tree structure while our model is a simple sequence model. As future work, we will apply such regularizers to tree-structured models. Last, on the MR dataset, our model is comparable to or slightly better than CNN. For fine-grained sentiment classification, we evaluate our model on the SST dataset which has five sentiment classes { very negative, negative, neutral, positive, very positive} so that we can evaluate the sentiment shifting effect of intensity words. The results are shown in Table 3. We have the following observations: First, linguistically regularized LSTM and BiLSTM are better than their counterparts. It’s worth noting that LR-Bi-LSTM (trained with just sentence-level annotation) is even comparable to Bi-LSTM trained with phrase-level annotation. That means, LR-Bi-LSTM can avoid the heavy phrase-level annotation but still obtain comparable results. Second, our models are comparable to TreeLSTM but our models are not dependent on a parsing tree and more simple, and hence more efficient. Further, for Tree-LSTM, the model is heavily dependent on phrase-level annotation, otherwise the performance drops substantially (from 51% to 48.1%). Last, on the SST dataset, our model is better than CNN, DAN, and NCSL. 
We conjecture that the strong performance of CNN-Tensor may be due to the tensor product operation, the enumeration of all skippable trigrams, and the concatenated representations of all pooling layers for final classification. 5.4 The Effect of Different Regularizers In order to reveal the effect of each individual regularizer, we conduct ablation experiments. Each time, we remove a regularizer and observe how the performance varies. First of all, we conduct this experiment on the entire datasets, and then we experiment on sub-datasets that only contain negation words or intensity words. The experiment results are shown in Table 4 where we can see that the non-sentiment regularizer (NSR) and sentiment regularizer (SR) play a key role3, and the negation regularizer and in3Kindly note that almost all sentences contain sentiment 1685 Method MR SST LR-Bi-LSTM 82.1 48.6 LR-Bi-LSTM (-NSR) 80.8 46.9 LR-Bi-LSTM (-SR) 80.6 46.9 LR-Bi-LSTM (-NR) 81.2 47.6 LR-Bi-LSTM (-IR) 81.7 47.9 LR-LSTM 81.5 48.2 LR-LSTM (-NSR) 80.2 46.4 LR-LSTM (-SR) 80.2 46.6 LR-LSTM (-NR) 80.8 47.4 LR-LSTM (-IR) 81.2 47.4 Table 4: The accuracy for LR-Bi-LSTM and LRLSTM with regularizer ablation. NSR, SR, NR and IR denotes Non-sentiment Regularizer, Sentiment Regularizer, Negation Regularizer, and Intensity Regularizer respectively. tensity regularizer are effective but less important than NSR and SR. This may be due to the fact that only 14% of sentences contains negation words in the test datasets, and 23% contains intensity words, and thus we further evaluate the models on two subsets, as shown in Table 5. The experiments on the subsets show that: 1) With linguistic regularizers, LR-Bi-LSTM outperforms Bi-LSTM remarkably on these subsets; 2) When the negation regularizer is removed from the model, the performance drops significantly on both MR and SST subsets; 3) Similar observations can be found regarding the intensity regularizer. Method Neg. Sub. Int. Sub. MR SST MR SST BiLSTM 72.0 39.8 83.2 48.8 LR-Bi-LSTM (-NR) 74.2 41.6 LR-Bi-LSTM (-IR) 85.2 50.0 LR-Bi-LSTM 78.5 44.4 87.1 53.2 Table 5: The accuracy on the negation sub-dataset (Neg. Sub.) that only contains negators, and intensity sub-dataset (Int. Sub.) that only contains intensifiers. 5.5 The Effect of the Negation Regularizer To further reveal the linguistic role of negation words, we compare the predicted sentiment distributions of a phrase pair with and without a negation word. The experimental results performed on MR are shown in Fig. 2. Each dot denotes a phrase words, see Tab. 1. pair (for example, <interesting, not interesting>), where the x-axis denotes the positive score4 of a phrase without negators (e.g., interesting), and the y-axis indicates the positive score for the phrase with negators (e.g., not interesting). The curves in the figures show this function: [1 −y, y] = softmax(Tnw ∗[1 −x, x]) where [1 −x, x] is a sentiment distribution on [negative, positive], x is the positive score of the phrase without negators (x-axis) and y that of the phrase with negators (yaxis), and Tnw is the transformation matrix for the negation word nw (see Eq. 9). By looking into the Figure 2: The sentiment shifts with negators. Each dot < x, y > indicates that x is the sentiment score of a phrase without negator and y is that of the phrase with a negator. 
detailed results of our model, we have the following statements: First, there is no dot at the up-right and bottomleft blocks, indicating that negators generally shift/convert very positive or very negative phrases to other polarities. Typical phrases include not very good, not too bad. Second, the dots at the up-left and bottom-right respectively indicates the negation effects: changing negative to positive and positive to negative. Typical phrases include never seems hopelessly (up-left), no good scenes (bottom-right), not interesting (bottom-right), etc. There are also some positive/negative phrases shifting to neutral sentiment such as not so good, and not too bad. Last, the dots located at the center indicate that neutral phrases maintain neutral sentiment with negators. Typical phrases include not at home, not here, where negators typically modify nonsentiment words. 5.6 The Effect of the Intensity Regularizer To further reveal the linguistic role of intensity words, we perform experiments on the SST dataset, as illustrated in Figure 3. We show the 4 The score is obtained from the predicted distribution, where 1 means positive and 0 means negative. 1686 matrix that indicates how the sentiment shifts after being modified by intensifiers. Each number in a cell (mij) indicates how many phrases are predicted with a sentiment label i but the prediction of the phrases with intensifiers changes to label j. For instance, the number 20 (m21) in the second matrix , means that there are 20 phrases predicted with a class of negative (-) but the prediction changes to very negative (- -) after being modified by intensifier “very”. Results in the first Figure 3: The sentiment shifting with intensifiers. The number in cell(mij) indicates how many phrases are predicted with sentiment label i but the prediction of phrases with intensifiers changes to label j. matrix show that, for intensifier “most”, there are 21/21/13/12 phrases whose sentiment is shifted after being modified by intensifiers, from negative to very negative (eg. most irresponsible picture), positive to very positive (eg. most famous author), neutral to negative (eg. most plain), and neutral to positive (eg. most closely), respectively. There are also many phrases retaining the sentiment after being modified with intensifiers. Not surprisingly, for very positive/negative phrases, phrases modified by intensifiers still maintain the strong sentiment. For the left phrases, they fall into three categories: first, words modified by intensifiers are non-sentiment words, such as most of us, most part; second, intensifiers are not strong enough to shift sentiment, such as most complex (from neg. to neg.), most traditional (from pos. to pos.); third, our models fail to shift sentiment with intensifiers such as most vital, most resonant film. 6 Conclusion and Future Work We present linguistically regularized LSTMs for sentence-level sentiment classification. The proposed models address the sentient shifting effect of sentiment, negation, and intensity words. Furthermore, our models are sequence LSTMs which do not depend on a parsing tree-structure and do not require expensive phrase-level annotation. Results show that our models are able to address the linguistic role of sentiment, negation, and intensity words. To preserve the simplicity of the proposed models, we do not consider the modification scope of negation and intensity words, though we partially address this issue by applying a minimization operartor (see Eq. 11, Eq. 
14) and bi-directional LSTM. As future work, we plan to apply the linguistic regularizers to tree-LSTM to address the scope issue since the parsing tree is easier to indicate the modification scope explicitly. Acknowledgments This work was partly supported by the National Basic Research Program (973 Program) under grant No. 2013CB329403, and the National Science Foundation of China under grant No.61272227/61332007. References Farah Benamara, Baptiste Chardon, Yannick Mathieu, Vladimir Popescu, and Nicholas Asher. 2012. How do negation and modality impact on opinions? In Proceedings of the Workshop on ExtraPropositional Aspects of Meaning in Computational Linguistics. pages 10–18. Yanqing Chen and Steven Skiena. 2014. Building sentiment lexicons for all major languages. In ACL. pages 383–389. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Li Dong, Furu Wei, Shujie Liu, Ming Zhou, and Ke Xu. 2015. A statistical parsing framework for sentiment classification. Computational Linguistics . Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2014. Adaptive multi-compositionality for recursive neural models with applications to sentiment analysis. In AAAI. AAAI. Federico Fancellu, Adam Lopez, and Bonnie Webber. 2016. Neural networks for negation scope detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. pages 495– 504. 1687 Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional lstm. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, pages 273–278. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 168– 177. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the Association for Computational Linguistics. Lifeng Jia, Clement Yu, and Weiyi Meng. 2009. The effect of negation on sentiment analysis and retrieval effectiveness. In Proceedings of the 18th ACM conference on Information and knowledge management. pages 1827–1830. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In ACL. pages 655–665. Alistair Kennedy and Diana Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational intelligence 22(2):110–125. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. pages 1746– 1751. Emanuele Lapponi, Jonathon Read, and Lilja Øvrelid. 2012. Representing and resolving negation for sentiment analysis. In 2012 IEEE 12th International Conference on Data Mining Workshops. pages 687– 692. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding cnns for text: non-linear, non-consecutive convolutions. ACL . Jingjing Liu and Stephanie Seneff. 2009. 
Review sentiment scoring via a parse-and-paraphrase paradigm. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. pages 161–169. Nikolaos Malandrakis, Alexandros Potamianos, Elias Iosif, and Shrikanth Narayanan. 2013. Distributional semantic models for affective text analysis. IEEE Transactions on Audio, Speech, and Language Processing 21(11):2379–2392. Tom´aˇs Mikolov. 2012. Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April . Woodley Packard, M. Emily Bender, Jonathon Read, Stephan Oepen, and Rebecca Dridan. 2014. Simple negation scope resolution through deep parsing: A semantic solution to a semantic problem. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. pages 69–78. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL. pages 115– 124. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In ACL. pages 79–86. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. EMNLP 12:1532–1543. Livia Polanyi and Annie Zaenen. 2006. Contextual valence shifters. In Computing attitude and affect in text: Theory and applications, Springer, pages 1–10. Qiao Qian, Bo Tian, Minlie Huang, Yang Liu, Xuan Zhu, and Xiaoyan Zhu. 2015. Learning tag embeddings and tag-specific composition functions in recursive neural network. In ACL. volume 1, pages 1365–1374. Raksha Sharma, Mohit Gupta, Astha Agarwal, and Pushpak Bhattacharyya. 2015. Adjective intensity and sentiment analysis. EMNLP2015 . Chaitanya Shivade, Marie-Catherine de Marneffe, Eric Folser-Lussier, and Albert Lai. 2015. Corpus-based discovery of semantic intensity scales. In Proceedings of NAACL-HTL . Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP. pages 151– 161. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. pages 1631–1642. Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational linguistics 37(2):267–307. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075 . 1688 Zhiyang Teng, Duy-Tin Vo, and Yue Zhang. 2016. Context-sensitive lexicon features for neural sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1629–1638. Peter D Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In ACL. pages 417–424. Duy Tin Vo and Yue Zhang. 2016. Dont count, predict! an automatic approach to learning sentiment lexicons for short text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. volume 2, pages 219–224. Feixiang Wang, Zhihua Zhang, and Man Lan. 2016. 
Ecnu at semeval-2016 task 7: An enhanced supervised learning method for lexicon sentiment intensity ranking. Proceedings of SemEval pages 491– 496. Wen-Li Wei, Chung-Hsien Wu, and Jen-Chun Lin. 2011. A regression approach to affective rating of chinese words from anew. In Affective Computing and Intelligent Interaction, Springer, pages 121– 131. Michael Wiegand, Alexandra Balahur, Benjamin Roth, Dietrich Klakow, and Andr´es Montoyo. 2010. A survey on the role of negation in sentiment analysis. In Proceedings of the workshop on negation and speculation in natural language processing. Association for Computational Linguistics, pages 60–68. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In EMNLP. pages 347– 354. Dani Yogatama and Noah A. Smith. 2014. Linguistic structured sparsity in text categorization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. pages 786–796. Xiaodan Zhu, Hongyu Guo, Saif Mohammad, and Svetlana Kiritchenko. 2014. An empirical study on the effect of negation words on sentiment. In ACL. pages 304–313. Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In ICML. pages 1604–1612. Bowei Zou, Guodong Zhou, and Qiaoming Zhu. 2013. Tree kernel-based negation and speculation scope detection with structured syntactic parse features. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pages 968–976. 1689
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1690–1700, Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1155
Sarcasm SIGN: Interpreting Sarcasm with Sentiment Based Monolingual Machine Translation
Lotem Peled and Roi Reichart
Faculty of Industrial Engineering and Management, Technion, IIT
[email protected], [email protected]
Abstract Sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment. In other words, ”Sarcasm is the giant chasm between what I say, and the person who doesn’t get it.”. In this paper we present the novel task of sarcasm interpretation, defined as the generation of a non-sarcastic utterance conveying the same message as the original sarcastic one. We introduce a novel dataset of 3000 sarcastic tweets, each interpreted by five human judges. Addressing the task as monolingual machine translation (MT), we experiment with MT algorithms and evaluation measures. We then present SIGN: an MT based sarcasm interpretation algorithm that targets sentiment words, a defining element of textual sarcasm. We show that while the scores of n-gram based automatic measures are similar for all interpretation models, SIGN’s interpretations are scored higher by humans for adequacy and sentiment polarity. We conclude with a discussion on future research directions for our new task.1
1 Our dataset, consisting of 3000 sarcastic tweets each augmented with five interpretations, is available in the project page: https://github.com/Lotemp/SarcasmSIGN. The page also contains the sarcasm interpretation guidelines, the code of the SIGN algorithms and other materials related to this project.
1 Introduction Sarcasm is a sophisticated form of communication in which speakers convey their message in an indirect way. It is defined in the Merriam-Webster dictionary (Merriam-Webster, 1983) as the use of words that mean the opposite of what one would really want to say in order to insult someone, to show irritation, or to be funny. Considering this definition, it is not surprising to find frequent use of sarcastic language in opinionated user generated content, in environments such as Twitter, Facebook, Reddit and many more. In textual communication, knowledge about the speaker’s intent is necessary in order to fully understand and interpret sarcasm. Consider, for example, the sentence ”what a wonderful day”. A literal analysis of this sentence demonstrates a positive experience, due to the use of the word wonderful. However, if we knew that the sentence was meant sarcastically, wonderful would turn into a word of a strong negative sentiment. In spoken language, sarcastic utterances are often accompanied by a certain tone of voice which points out the intent of the speaker, whereas in textual communication, sarcasm is inherently ambiguous, and its identification and interpretation may be challenging even for humans. In this paper we present the novel task of interpretation of sarcastic utterances. We define the purpose of the interpretation task as the capability to generate a non-sarcastic utterance that captures the meaning behind the original sarcastic text.
Our work currently targets the Twitter domain since it is a medium in which sarcasm is prevalent, and it allows us to focus on the interpretation of tweets marked with the content tag #sarcasm. And so, for example, given the tweet ”how I love Mondays. #sarcasm” we would like our system to generate interpretations such as ”how I hate Mondays” or ”I really hate Mondays”. In order to learn such interpretations, we constructed a parallel corpus of 3000 sarcastic tweets, each of which has five non-sarcastic interpretations (Section 3). Our task is complex since sarcasm can be expressed in many forms, it is ambiguous in nature and its understanding may require world knowl1690 edge. Following are several examples taken from our corpus: 1. loving life so much right now. #sarcasm 2. Way to go California! #sarcasm 3. Great, a choice between two excellent candidates, Donald Trump or Hillary Clinton. #sarcasm In example (1) it is quite straightforward to see the exaggerated positive sentiment used in order to convey strong negative feelings. Examples (2) and (3), however, do not contain any excessive sentiment. Instead, previous knowledge is required if one wishes to fully understand and interpret what went wrong with California, or who Hillary Clinton and Donald Trump are. Since sarcasm is a refined and indirect form of speech, its interpretation may be challenging for certain populations. For example, studies show that children with deafness, autism or Asperger’s Syndrome struggle with non literal communication such as sarcastic language (Peterson et al., 2012; Kimhi, 2014). Moreover, since sarcasm transforms the polarity of an apparently positive or negative expression into its opposite, it poses a challenge for automatic systems for opinion mining, sentiment analysis and extractive summarization (Popescu et al., 2005; Pang and Lee, 2008; Wiebe et al., 2004). Extracting the honest meaning behind the sarcasm may alleviate such issues. In order to design an automatic sarcasm interpretation system, we first rely on previous work in established similar tasks (section 2), particularly machine translation (MT), borrowing algorithms as well as evaluation measures. In section 4 we discuss the automatic evaluation measures we apply in our work and present human based measures for: (a) the fluency of a generated nonsarcastic utterance, (b) its adequacy as interpretation of the original sarcastic tweet’s meaning, and (c) whether or not it captures the sentiment of the original tweet. Then, in section 5, we explore the performance of prominent phrase-based and neural MT systems on our task in development data experiments. We next present the Sarcasm SIGN (Sarcasm Sentimental Interpretation GeNerator, section 6), our novel MT based algorithm which puts a special emphasis on sentiment words. Lastly, in Section 7 we assess the performance of the various algorithms and show that while they perform similarly in terms of automatic MT evaluation, SIGN is superior according to the human measures. We conclude with a discussion on future research directions for our task, regarding both algorithms and evaluation. 2 Related Work The use of irony and sarcasm has been well studied in the linguistics (Muecke, 1982; Stingfellow, 1994; Gibbs and Colston, 2007) and the psychology (Shamay-Tsoory et al., 2005; Peterson et al., 2012) literature. In computational work, the interest in sarcasm has dramatically increased over the past few years. 
This is probably due to factors such as the rapid growth in user generated content on the web, in which sarcasm is used excessively (Maynard et al., 2012; Kaplan and Haenlein, 2011; Bamman and Smith, 2015; Wang, 2013) and the challenge that sarcasm poses for opinion mining and sentiment analysis systems (Pang and Lee, 2008; Maynard and Greenwood, 2014). Despite this rising interest, and despite many works that deal with sarcasm identification (Tsur et al., 2010; Davidov et al., 2010; Gonz´alez-Ib´anez et al., 2011; Riloff et al., 2013; Barbieri et al., 2014), to the best of our knowledge, generation of sarcasm interpretations has not been previously attempted. Therefore, the following sections are dedicated to previous work from neighboring NLP fields which are relevant to our work: sarcasm detection, MT, paraphrasing and text summarization. Sarcasm Detection Recent computational work on sarcasm revolves mainly around detection. Due to the large volume of detection work, we survey only several representative examples. Tsur et al. (2010) and Davidov et al. (2010) presented a semi-supervised approach for detecting irony and sarcasm in product-reviews and tweets, where features are based on ironic speech patterns extracted from a labeled dataset. Gonz´alez-Ib´anez et al. (2011) used lexical and pragmatic features, e.g. emojis and whether the utterance is a comment to another person, in order to train a classifier that distinguishes sarcastic utterances from tweets of positive and negative sentiment. Riloff et al. (2013) observed that a certain type of sarcasm is characterized by a contrast between a positive sentiment and a negative situation. Consequently, they described a bootstrapping algorithm that learns distinctive phrases connected to negative situations along with a positive sentiment and used these phrases to train their classifier. Barbieri et al. (2014) avoided using word patterns and 1691 instead employed features such as the length and sentiment of the tweet, and the use of rare words. Despite the differences between detection and interpretation, this line of work is highly relevant to ours in terms of feature design. Moreover, it presents fundamental notions, such as the sentiment polarity of the sarcastic utterance and of its interpretation, that we adopt. Finally, when utterances are not marked for sarcasm as in the Twitter domain, or when these labels are not reliable, detection is a necessary step before interpretation. Machine Translation We approach our task as one of monolingual MT, where we translate sarcastic English into non-sarcastic English. Therefore, our starting point is the application of MT techniques and evaluation measures. The three major approaches to MT are phrase based (Koehn et al., 2007), syntax based (Koehn et al., 2003) and the recent neural approach. For automatic MT evaluation, often an n-gram co-occurrence based scoring is performed in order to measure the lexical closeness between a candidate and a reference translations. Example measures are NIST (Doddington, 2002), METEOR (Denkowski and Lavie, 2011), and the widely used BLEU (Papineni et al., 2002), which represents precision: the fraction of n-grams from the machine generated translation that also appear in the human reference. Here we employ the phrase based Moses system (Koehn et al., 2007) and an RNN-encoder-decoder architecture, based on Cho et al. (2014). 
Later we will show that these algorithms can be further improved and will explore the quality of the MT evaluation measures in the context of our task. Paraphrasing and Summarization Tasks such as paraphrasing and summarization are often addressed as monolingual MT, and so they are close in nature to our task. Quirk et al. (2004) proposed a model of paraphrasing based on monolingual MT, and utilized alignment models used in the Moses translation system (Koehn et al., 2007; Wubben et al., 2010; Bannard and Callison-Burch, 2005). Xu et al. (2015) presented the task of paraphrase generation while targeting a particular writing style, specifically paraphrasing modern English into Shakespearean English, and approached it with phrase based MT. Work on paraphrasing and summarization is often evaluated using MT evaluation measures such as BLEU. As BLEU is precision-oriented, complementary recall-oriented measures are often used as well. A prominent example is ROUGE (Lin, 2004), a family of measures used mostly for evaluation in automatic summarization: candidate summaries are scored according to the fraction of n-grams from the human references they contain. We also utilize PINC (Chen and Dolan, 2011), a measure which rewards paraphrases for being different from their source, by introducing new n-grams. PINC is often combined with BLEU due to their complementary nature: while PINC rewards n-gram novelty, BLEU rewards similarity to the reference. The highest correlation with human judgments is achieved by the product of PINC with a sigmoid function of BLEU (Chen and Dolan, 2011). 3 A Parallel Sarcastic Tweets Corpus To properly investigate our task, we collected a dataset, first of its kind, of sarcastic tweets and their non-sarcastic (honest) interpretations. This data, as well as the instructions provided for our human judges, will be made publicly available and will hopefully provide a basis for future work regarding sarcasm on Twitter. Despite the focus of the current work on the Twitter domain, we consider our task as a more general one, and hope that our discussion, observations and algorithms will be beneficial for other domains as well. Using the Twitter API2, we collected tweets marked with the content tag #sarcasm, posted between Januray and June of 2016. Following Tsur et al. (2010), Gonz´alez-Ib´anez et al. (2011) and Bamman and Smith (2015), we address the problem of noisy tweets with automatic filtering: we remove all tweets not written in English, discard retweets (tweets that have been forwarded or shared) and remove tweets containing URLs or images, so that the sarcasm in the tweet regards to the text only and not to an image or a link. This results in 3000 sarcastic tweets containing text only, where the average sarcastic tweet length is 13.87 utterances, the average interpretation length is 12.10 words and the vocabulary size is 8788 unique words. In order to obtain honest interpretations for our sarcastic tweets, we used Fiverr3 – a platform for selling and purchasing services from independent suppliers (also referred to as workers). We em2http://apiwiki.twitter.com 3https://www.fiverr.com 1692 Sarcastic Tweets Honest Interpretations What a great way to end my night. #sarcasm 1. Such a bad ending to my night 2. Oh what a great way to ruin my night 3. What a horrible way to end a night 4. Not a good way to end the night 5. Well that wasn’t the night I was hoping for Staying up till 2:30 was a brilliant idea, very productive #sarcasm 1. 
Bad idea staying up late, not very productive 2. It was not smart or productive for me to stay up so late 3. Staying up till 2:30 was not a brilliant idea, very non-productive 4. I need to go to bed on time 5. Staying up till 2:30 was completely useless Table 1: Examples from our parallel sarcastic tweet corpus. ployed ten Fiverr workers, half of them from the field of comedy writing, and half from the field of literature paraphrasing. The chosen workers were made sure to have an active Twitter account, in order to ensure their acquaintance with social networks and with Twitter’s colorful language (hashtags, common acronyms such as LOL, etc.). We then randomly divided our tweet corpus to two batches of size 1500 each, and randomly assigned five workers to each batch. We instructed the workers to translate each sarcastic tweet into a non sarcastic utterance, while maintaining the original meaning. We encouraged the workers to use external knowledge sources (such as Google) if they came across a subject they were not familiar with, or if the sarcasm was unclear to them. Although our dataset consists only of tweets that were marked with the hashtag #sarcasm, some of these tweets were not identified as sarcastic by all or some of our Fiverr workers. In such cases the workers were instructed to keep the original tweet unchanged (i.e, uninterpreted). We keep such tweets in our dataset since we expect a sarcasm interpretation system to be able to recognize non-sarcastic utterances in its input, and to leave them in their original form. Table 1 presents two examples from our corpus. The table demonstrates the tendency of the workers to generally agree on the core meaning of the sarcastic tweets. Yet, since sarcasm is inherently vague, it is not surprising that the interpretations differ from one worker to another. For example, some workers change only one or two words from the original sarcastic tweet, while others rephrase the entire utterance. We regard this as beneficial, since it brings a natural, human variance into the task. This variance makes the evaluation of automatic sarcasm interpretation algorithms challenging, as we further discuss in the next section. 4 Evaluation Measures As mentioned above, in certain cases world knowledge is mandatory in order to correctly evaluate sarcasm interpretations. For example, in the case of the second sarcastic tweet in table 1, we need to know that 2:30 is considered a late hour so that staying up till 2:30 and staying up late would be considered equivalent despite the lexical difference. Furthermore, we notice that transforming a sarcastic utterance into a non sarcastic one often requires to change a small number of words. For example, a single word change in the sarcastic tweet ”How I love Mondays. #sarcasm” leads to the non-sarcastic utterance How I hate Mondays. This is not typical for MT, where usually the entire source sentence is translated to a new sentence in the target language and we would expect lexical similarity between the machine generated translation and the human reference it is compared to. This raises a doubt as to whether n-gram based MT evaluation measures such as the aforementioned are suitable for our task. We hence asses the quality of an interpretation using automatic evaluation measures from the tasks of MT, paraphrasing, and summarization (Section 2), and compare these measures to human-based measures. Automatic Measures We use BLEU and ROUGE as measures of n-gram precision and recall, respectively. 
We report scores of ROUGE-1, ROUGE-2 and ROUGE-L (recall based on unigrams, bigrams and longest common subsequence between candidate and reference, respectively). In order to asses the n-gram novelty of interpretations (i.e, difference from the source), we report PINC and PINC∗sigmoid(BLEU) (see Section 2). Human judgments We employed an additional group of five Fiverr workers and asked them to score each generated interpretations with two scores on a 1-7 scale, 7 being the best. The scores 1693 Sarcastic Tweet Moses Interpretation Neural Interpretation Boy , am I glad the rain’s here #sarcasm Boy, I’m so annoyed that the rain is here I’m not glad to go today Another night of work, Oh, the joy #sarcasm Another night of work, Ugh, unbearable Another night, I don’t like it Being stuck in an airport is fun #sarcasm Be stuck in an airport is not fun Yay, stuck at the office again You’re the best. #sarcasm You’re the best You’re my best friend Table 2: Sarcasm interpretations generated by Moses and by the RNN. Evaluation Measure Moses RNN Precision Oriented BLEU 62.91 41.05 Novelty Oriented PINC 51.81 76.45 PINC∗sigmoid(BLEU) 33.79 45.96 Recall Oriented ROUGE-1 66.44 42.20 ROUGE-2 41.03 29.97 ROUGE-l 65.31 40.87 Human Judgments Fluency 6.46 5.12 Adequacy 2.54 2.08 % correct sentiment 28.84 17.93 Table 3: Development data results for MT models. are: adequacy: the degree to which the interpretation captures the meaning of the original tweet; and fluency: how readable the interpretation is. In addition, reasoning that a high quality interpretation is one that captures the true intent of the sarcastic utterance by using words suitable to its sentiment, we ask the workers to assign the interpretation with a binary score indicating whether the sentiment presented in the interpretation agrees with the sentiment of the original sarcastic tweet.4 The human measures enjoy high agreement levels between the human judges. The averaged root mean squared error calculated on the test set across all pairs of judges and across the various algorithms we experiment with are: 1.44 for fluency and 1.15 for adequacy. For sentiment scores the averaged agreement at the same setup is 93.2%. 5 Sarcasm Interpretations as MT As our task is about the generation of one English sentence given another, a natural starting point is treating it as monolingual MT. We hence begin with utilizing two widely used MT systems, representing two different approaches: Phrase Based MT vs. Neural MT. We then analyze the performance of these two systems, and based on our conclusions we design our SIGN model. 4For example, we consider ”Best day ever #sarcasm” and its interpretation ”Worst day ever” to agree on the sentiment, despite the use of opposite sentiment words. Phrase Based MT We employ Moses5, using word alignments extracted by GIZA++ (Och and Ney, 2003) and symmetrized with the grow-diagfinal strategy. We use phrases of up to 8 words to build our phrase table, and do not filter sentences according to length since tweets contain at most 140 characters. We employ the KenLM algorithm (Heafield, 2011) for language modeling, and train it on the non-sarcastic tweet interpretations (the target side of the parallel corpus). Neural Machine Translation We use GroundHog, a publicly available implementation of an RNN encoder-decoder, with LSTM hidden states.6 Our encoder and decoder contain 250 hidden units each. 
We use the minibatch stochastic gradient descent (SGD) algorithm together with Adadelta (Zeiler, 2012) to train each model, where each SGD update is computed using a minibatch of 16 utterances. Following Sutskever et al. (2014), we use beam search for test time decoding. Henceforth we refer to this system as RNN. Performance Analysis We divide our corpus into training, development and test sets of sizes 2400, 300 and 300 respectively. We train Moses and the RNN on the training set and tune their parameters on the development set. Table 3 presents development data results, as these are preliminary experiments that aim to asses the compatibility of MT algorithms to our task. Moses scores much higher in terms of BLEU and ROUGE, meaning that compared to the RNN its interpretations capture more n-grams appearing in the human references while maintaining high precision. The RNN outscores Moses in terms of PINC and PINC∗sigmoid(BLEU), meaning that its interpretations are more novel, in terms of ngrams. This alone might not be a negative trait; However, according to human judgments Moses performs better in terms of fluency, adequacy and sentiment, and so the novelty of the RNN’s interpretations does not necessarily contribute to their 5http://www.statmt.org/moses 6https://github.com/lisa-groundhog/ GroundHog 1694 “How I love Mondays # sarcasm “How I cluster-i Mondays # sarcasm MOSES love like ... cluster-i “How I hate Mondays “How I cluster-j Mondays # sarcasm hate despise ... cluster-j de-clustering clustering Figure 1: An illustration of the application of SIGN to the tweet ”How I love Mondays # sarcasm”. quality, and even possibly reduces it. Table 2 illustrates several examples of the interpretations generated by both Moses and the RNN. While the interpretations generated by the RNN are readable, they generally do not maintain the meaning of the original tweet. We believe that this is the result of the neural network overfitting the training set, despite regularization and dropout layers, probably due to the relatively small training set size. In light of these results when we experiment with the SIGN algorithm (Section 7), we employ Moses as its MT component. The final example of Table 2 is representative of cases where both Moses and the RNN fail to capture the sarcastic sense of the tweet, incorrectly interpreting it or leaving it unchanged. In order to deal with such cases, we wish to utilize a property typical of sarcastic language. Sarcasm is mostly used to convey a certain emotion by using strong sentiment words that express the exact opposite of their literal meaning. Hence, many sarcastic utterances can be correctly interpreted by keeping most of their words, replacing only sentiment words with expressions of the opposite sentiment. For example, the sarcasm in the utterance ”You’re the best. #sarcasm” is hidden in best, a word of a strong positive sentiment. If we transform this word into a word of the opposite sentiment, such as worst, then we get a non-sarcastic utterance with the correct sentiment. We next present the Sarcasm SIGN (Sarcasm Sentimental Interpretation GeNerator), an algorithm which capitalizes on sentiment words in order to produce accurate interpretations. 6 The Sarcasm SIGN Algorithm SIGN (Figure 1) targets sentiment words in sarcastic utterances. First, it clusters sentiment words according to semantic relatedness. Then, each senPositive Clusters merit, wonder, props, praise, congratulations.. patience, dignity, truth, chivalry, rationality... 
Negative Clusters hideous, horrible, nasty, obnoxious, scary, pathetic... shame, sadness, sorrow, fear, disappointment, regret, danger... Table 4: Examples of two positive and two negative clusters created by the SIGN algorithm. timent word is replaced with its cluster 7 and the transformed data is fed into an MT system (Moses in this work), at both its training and test phases. Consequently, at test time the MT system outputs non-sarcastic utterances with clusters replacing sentiment words. Finally, SIGN performs a declustering process on these MT outputs, replacing sentiment clusters with suitable words. In order to detect the sentiment of words, we turn to SentiWordNet (Esuli and Sebastiani, 2006), a lexical resource based on WordNet (Miller et al., 1990). Using SentiWordNet’s positivity and negativity scores, we collect from our training data a set of distinctly positive words (∼70) and a set of distinctly negative words (∼160).8 We then utilize the pre-trained dependency-based word embeddings of Levy and Goldberg (2014)9 and cluster each set using the k-means algorithm with L2 distance. We aim to have ten words on average in each cluster, and so the positive set is clustered into 7 clusters, and the negative set into 16 clusters. Table 4 presents examples from our clusters. Upon receiving a sarcastic tweet, at both training and test, SIGN searches it for sentiment words according to the positive and negative sets. If such 7This means that we replace a word with cluster-j where j is the number of the cluster to which the word belongs. 8The scores are in the [0,1] range. We set the threshold of 0.6 for both distinctly positive and distinctly negative words. 9https://levyomer.wordpress.com/2014/ 04/25/dependency-based-word-embeddings/. We choose these embeddings since they are believed to better capture the relations between a word and its context, having been trained on dependency-parsed sentences. 1695 Evaluation Measure Moses SIGN-centroid SIGN-context SIGN-oracle Precision Oriented BLEU 65.24 63.52 66.96 67.49 Novelty Oriented PINC 45.92 47.11 46.65 46.10 PINC∗sigmoid(BLEU) 30.21 30.79 31.13 30.54 Recall Oriented ROUGE-1 70.26 68.43 69.67 70.34 ROUGE-2 42.18 40.34 40.96 42.81 ROUGE-l 69.82 68.24 69.98 70.01 Table 5: Test data results with automatic evaluation measures. a word is found, it is replaced with its cluster. For example, given the sentence ”How I love Mondays. #sarcasm”, love will be recognized as a positive sentiment word, and the sarcastic tweet will become: ”How I cluster-i Mondays. #sarcasm” where i is the cluster number of the word love. During training, this process is also applied to the non-sarcastic references. And so, if one such reference is ”I dislike Mondays.”, then dislike will be identified and the reference will become ”I cluster-j Mondays.”, where j is the cluster number of the word dislike. Moses is then trained on these new representations of the corpus, using the exact same setup as before. This training process produces a mapping between positive and negative clusters, and outputs sarcastic interpretations with clustered sentiment words (e.g, ”I cluster-j Mondays.”). At test time, after Moses generates an utterance containing clusters, a de-clustering process takes place: the clusters are replaced with the appropriate sentiment words. We experiment with several de-clustering approaches: (1) SIGN-centroid: the chosen sentiment word will be the one closest to the centroid of cluster j. 
Upon receiving a sarcastic tweet, at both training and test time, SIGN searches it for sentiment words according to the positive and negative sets. If such a word is found, it is replaced with its cluster. For example, given the sentence "How I love Mondays. #sarcasm", love will be recognized as a positive sentiment word, and the sarcastic tweet will become "How I cluster-i Mondays. #sarcasm", where i is the cluster number of the word love. During training, this process is also applied to the non-sarcastic references. And so, if one such reference is "I dislike Mondays.", then dislike will be identified and the reference will become "I cluster-j Mondays.", where j is the cluster number of the word dislike. Moses is then trained on these new representations of the corpus, using the exact same setup as before. This training process produces a mapping between positive and negative clusters, and outputs sarcastic interpretations with clustered sentiment words (e.g., "I cluster-j Mondays.").

At test time, after Moses generates an utterance containing clusters, a de-clustering process takes place: the clusters are replaced with appropriate sentiment words. We experiment with several de-clustering approaches: (1) SIGN-centroid: the chosen sentiment word is the one closest to the centroid of cluster j. For example, in the tweet "I cluster-j Mondays.", the sentiment word closest to the centroid of cluster j will be chosen. (2) SIGN-context: the cluster is replaced with the word from the cluster that has the highest average pointwise mutual information (PMI) with the words in a symmetric context window of size 3 around the cluster's location in the output. For example, for "I cluster-j Mondays.", the sentiment word from cluster j which has the highest average PMI with the words in {I, Mondays} will be chosen; the PMI values are computed on the training data. (3) SIGN-oracle: an upper bound where a person manually chooses the most suitable word from the cluster.

We expect this process to improve the quality of sarcasm interpretations in two aspects. First, as mentioned earlier, sarcastic tweets often differ from their non-sarcastic interpretations in a small number of sentiment words (sometimes even in a single word). SIGN should help highlight the sentiment words most in need of interpretation. Second, given the preprocessing SIGN performs on the input examples of Moses, the latter is inclined to learn a mapping from positive to negative clusters, and vice versa. This is likely to encourage Moses to generate outputs of the same sentiment as the original sarcastic tweet, but with honest sentiment words. For example, if the sarcastic tweet expresses a negative sentiment with strong positive words, the non-sarcastic interpretation will express this negative sentiment with negative words, thus stripping away the sarcasm.
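The SIGN-context selection rule described above can be sketched as follows. This is a hedged illustration: it assumes whitespace tokens, a co-occurrence table precomputed from the training data, and a symmetric window taken here as three words on each side; the paper's exact window definition and any smoothing are not specified in this sketch.

# Sketch of SIGN-context de-clustering: each cluster token in the MT output is
# replaced by the cluster word with the highest average PMI with its context.
import math

def pmi(w1, w2, pair_counts, word_counts, total):
    """PMI from raw co-occurrence counts; returns -inf for unseen pairs."""
    joint = pair_counts.get((w1, w2), 0) + pair_counts.get((w2, w1), 0)
    if joint == 0:
        return float("-inf")
    return math.log(joint * total / (word_counts.get(w1, 1) * word_counts.get(w2, 1)))

def decluster_context(tokens, clusters, pair_counts, word_counts, total, window=3):
    """Replace each cluster token with the cluster word of highest average PMI with its context."""
    out = []
    for i, tok in enumerate(tokens):
        if not tok.startswith("cluster-"):
            out.append(tok)
            continue
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        candidates = clusters[int(tok.split("-")[1])]  # words belonging to this cluster
        best = max(candidates,
                   key=lambda w: sum(pmi(w, c, pair_counts, word_counts, total)
                                     for c in context) / max(1, len(context)))
        out.append(best)
    return out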
7 Experiments and Results

We experiment with SIGN and the Moses and RNN baselines in the same setup as Section 5. We report test set results for automatic and human measures in Tables 5 and 6, respectively.

Table 5: Test data results with automatic evaluation measures.
Evaluation measure           Moses   SIGN-centroid   SIGN-context   SIGN-oracle
BLEU (precision oriented)    65.24   63.52           66.96          67.49
PINC (novelty oriented)      45.92   47.11           46.65          46.10
PINC∗sigmoid(BLEU)           30.21   30.79           31.13          30.54
ROUGE-1 (recall oriented)    70.26   68.43           69.67          70.34
ROUGE-2                      42.18   40.34           40.96          42.81
ROUGE-l                      69.82   68.24           69.98          70.01

Table 6: Test set results with human measures. %changed provides the fraction of tweets that were changed during interpretation (i.e. the tweet and its interpretation are not identical). In cases where one of our models presents significant improvement over Moses, the results are decorated with a star. Statistical significance is tested with the paired t-test for fluency and adequacy, and with the McNemar paired test for labeling disagreements (Gillick and Cox, 1989) for % correct sentiment, in both cases with p < 0.05.
                Fluency   Adequacy   % correct sentiment   % changed
Moses           6.67      2.55       25.7                  42.3
SIGN-centroid   6.38      3.23*      42.2*                 67.4
SIGN-context    6.66      3.61*      46.2*                 68.5
SIGN-oracle     6.69      3.67*      46.8*                 68.8

As in the development data experiments (Table 3), the RNN presents critically low adequacy scores of 2.11 across the entire test set and of 1.89 in cases where the interpretation and the tweet differ. This, along with its low fluency scores (5.74 and 5.43, respectively) and its very low BLEU and ROUGE scores, makes us deem this model immature for our task and dataset; hence we exclude it from this section's tables and do not discuss it further.

In terms of automatic evaluation (Table 5), SIGN and Moses do not perform significantly differently. When it comes to human evaluation (Table 6), however, SIGN-context presents substantial gains. While for fluency Moses and SIGN-context perform similarly, SIGN-context performs much better in terms of adequacy and the percentage of tweets with the correct sentiment. The differences are substantial as well as statistically significant: adequacy of 3.61 for SIGN-context compared to 2.55 for Moses, and correct sentiment for 46.2% of the SIGN interpretations, compared to only 25.7% of the Moses interpretations.

Table 6 further provides an initial explanation of the improvement of SIGN over Moses: Moses tends to keep interpretations identical to the original sarcastic tweet, altering them in only 42.3% of the cases (we elaborate on this in Section 8), while SIGN-context's interpretations differ from the original sarcastic tweet in 68.5% of the cases, which comes closer to the 73.8% in the gold standard human interpretations. If for each of the algorithms we only consider interpretations that differ from the original sarcastic tweet, the differences between the models are less substantial. Nonetheless, SIGN-context still presents an improvement, correctly changing sentiment in 67.5% of the cases compared to 60.8% for Moses.

Both tables consistently show that the context-based selection strategy of SIGN outperforms the centroid alternative. This makes sense as, being context-ignorant, SIGN-centroid might produce non-fluent or inadequate interpretations for a given context. For example, the tweet "Also gotta move a piano as well. joy #sarcasm" is changed to "Also gotta move a piano as well. bummer" by SIGN-context, while SIGN-centroid changes it to the less appropriate "Also gotta move a piano as well. boring". Nonetheless, even this naive de-clustering approach substantially improves adequacy and sentiment accuracy over Moses. Finally, comparison to SIGN-oracle reveals that the context selection strategy is not far from human performance with respect to both automatic and human evaluation measures. Still, some gain can be achieved, especially for the human measures on tweets that were changed at interpretation. This indicates that SIGN can improve mostly through a better clustering of sentiment words, rather than through a better selection strategy.

8 Discussion and Future Work

Automatic vs. Human Measures

The performance gap between Moses and SIGN may stem from the difference in their optimization criteria. Moses aims to optimize the BLEU score and, given the overall lexical similarity between the original tweets and their interpretations, it therefore tends to keep them identical. SIGN, in contrast, targets sentiment words and changes them frequently. Consequently, we do not observe substantial differences between the algorithms in the automatic measures, which are mostly based on n-gram differences between the source and the interpretation. Likewise, the human fluency measure, which accounts for the readability of the interpretation, is not seriously affected by the translation process. When it comes to the human adequacy and sentiment measures, which account for the understanding of the tweet's meaning, SIGN reveals its power and demonstrates much better performance compared to Moses.

To further understand the relationship between the automatic and the human-based measures, we computed the Pearson correlations for each pair of (automatic, human) measures. We observe that all correlation values are low (up to 0.12 for fluency, 0.13-0.18 for sentiment and 0.19-0.24 for adequacy). Moreover, for fluency the correlation values are insignificant (using a correlation significance t-test with p = 0.05). We believe this indicates that these automatic measures do not provide an appropriate evaluation for our task. Designing appropriate automatic measures is hence left for future research.
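For completeness, the correlation analysis above can be reproduced along the following lines; the per-tweet score lists are placeholders for the actual automatic and human evaluation outputs, and scipy's pearsonr provides both the coefficient and the p-value used for the significance check.

# Illustrative computation of an (automatic, human) correlation; dummy data only.
from scipy.stats import pearsonr

def correlate(automatic_scores, human_scores, alpha=0.05):
    """Pearson correlation between an automatic measure and a human measure."""
    r, p = pearsonr(automatic_scores, human_scores)
    return r, p, p < alpha  # coefficient, p-value, significant at level alpha?

bleu_per_tweet = [0.41, 0.72, 0.18, 0.90, 0.55]   # placeholder scores
adequacy_per_tweet = [3, 5, 2, 6, 4]              # placeholder scores
print(correlate(bleu_per_tweet, adequacy_per_tweet))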
Sarcasm Interpretation as Sentiment Based Monolingual MT: Strengths and Weaknesses

The SIGN models' strength is revealed when interpreting sarcastic tweets with strong sentiment words, transforming expressions such as "Audits are a blast to do #sarcasm" and "Being stuck in an airport is fun #sarcasm" into "Audits are a bummer to do" and "Being stuck in an airport is boring", respectively. Even when there are no words of strong sentiment, the MT component of SIGN still performs well, interpreting tweets such as "the Cavs aren't getting any calls, this is new #sarcasm" into "the Cavs aren't getting any calls, as usuall".

The SIGN models perform well even in cases where there are several sentiment words but not all of them require change. For example, for the sarcastic tweet "Constantly being irritated, anxious and depressed is a great feeling! #sarcasm", SIGN-context produces the adequate interpretation "Constantly being irritated, anxious and depressed is a terrible feeling".

Future research directions arise from cases in which the SIGN models left the tweet unchanged. One prominent set of examples consists of tweets that require world knowledge for correct interpretation. Consider the tweet "Can you imagine if Lebron had help? #sarcasm". The model requires knowledge of who Lebron is and what kind of help he needs in order to fully understand and interpret the sarcasm. In practice the SIGN models leave this tweet untouched. Another set of examples consists of tweets that lack an explicit sentiment word, for example the tweet "Clear example they made of Sharapova then, ey? #sarcasm". While for a human reader it is apparent that the author means a clear example was not made of Sharapova, the lack of strong sentiment words results in all SIGN models leaving this tweet uninterpreted. Finally, tweets that present sentiment in phrases or slang words are particularly challenging for our approach, which relies on the identification and clustering of sentiment words. Consider, for example, the following two cases: (a) the sarcastic tweet "Can't wait until tomorrow #sarcasm", where the positive sentiment is expressed in the phrase can't wait; and (b) the sarcastic tweet "another shooting? yeah we totally need to make guns easier for people to get #sarcasm", where the word totally receives a strong sentiment despite its normal use in language. While we believe that identifying the role of can't wait and of totally in the sentiment of the above tweets can be a key to properly interpreting them, our approach, which relies on a sentiment word lexicon, is challenged by such cases.

Summary

We presented a first attempt to approach the problem of sarcasm interpretation. Our major contributions are:
• Construction of a dataset, the first of its kind, that consists of 3000 tweets, each augmented with five non-sarcastic interpretations generated by human experts.
• Discussion of the proper evaluation in our task. We proposed a battery of human measures and compared their performance to the accepted measures in related fields such as machine translation.
• An algorithmic approach: sentiment based monolingual machine translation. We demonstrated the strength of our approach and pointed to cases that are currently beyond its reach.
Several challenges are still to be addressed in future research so that sarcasm interpretation can be performed in a fully automatic manner.
These include the design of appropriate automatic evaluation measures, as well as improving the algorithmic approach so that it can take world knowledge into account and deal with cases where the sentiment of the input tweet is not expressed with clear sentiment words. We are releasing our dataset with its sarcasm interpretation guidelines, the code of the SIGN algorithms, and the output of the various algorithms considered in this paper (https://github.com/Lotemp/SarcasmSIGN). We hope this new resource will help researchers make further progress on this new task.

References

David Bamman and Noah A. Smith. 2015. Contextualized sarcasm detection on Twitter. In Ninth International AAAI Conference on Web and Social Media. http://dblp.uni-trier.de/rec/bib/conf/icwsm/BammanS15.

Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 597–604. www.aclweb.org/anthology/P05-1074.

Francesco Barbieri, Horacio Saggion, and Francesco Ronzano. 2014. Modelling sarcasm in Twitter, a novel approach. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. pages 50–58. https://doi.org/10.3115/v1/W14-2609.

David L. Chen and William B. Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1. Association for Computational Linguistics, pages 190–200. www.aclweb.org/anthology/P11-1020.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP. https://doi.org/10.3115/v1/d14-1179.

Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in Twitter and Amazon. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 107–116. https://www.aclweb.org/anthology/W/W10/W10-2914.pdf.

Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation. Association for Computational Linguistics, pages 85–91. www.aclweb.org/anthology/W11-2107.

George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research. Morgan Kaufmann Publishers Inc., pages 138–145. https://doi.org/10.3115/1289189.1289273.

Andrea Esuli and Fabrizio Sebastiani. 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In Proceedings of LREC. http://aclweb.org/anthology/L06-1225.

Raymond W. Gibbs and Herbert L. Colston. 2007. Irony in Language and Thought: A Cognitive Science Reader. Psychology Press.

Laurence Gillick and Stephen J. Cox. 1989. Some statistical issues in the comparison of speech recognition algorithms. In Proceedings of ICASSP. IEEE. https://doi.org/10.1109/ICASSP.1989.266481.

Roberto González-Ibáñez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in Twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2. Association for Computational Linguistics, pages 581–586. http://www.aclweb.org/anthology/P11-2102.
Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation. Association for Computational Linguistics, pages 187–197. www.aclweb.org/anthology/W11-2123.

Andreas M. Kaplan and Michael Haenlein. 2011. Two hearts in three-quarter time: How to waltz the social media/viral marketing dance. Business Horizons 54(3):253–263. https://doi.org/10.1016/j.bushor.2011.01.006.

Yael Kimhi. 2014. Theory of mind abilities and deficits in autism spectrum disorders. Topics in Language Disorders 34(4):329–343. https://doi.org/10.1097/tld.0000000000000033.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL, Interactive Poster and Demonstration Sessions. Association for Computational Linguistics, pages 177–180. https://www.aclweb.org/anthology/P/P07/P072.pdf.

Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Association for Computational Linguistics, pages 48–54. www.aclweb.org/anthology/N/N03/N03-1017.ps.

Omer Levy and Yoav Goldberg. 2014. Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 302–308. https://doi.org/10.3115/v1/P14-2050.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out (ACL-04 Workshop). http://aclweb.org/anthology/W04-1013.

Diana Maynard, Kalina Bontcheva, and Dominic Rout. 2012. Challenges in developing opinion mining tools for social media. In Proceedings of the @NLP can u tag #usergeneratedcontent workshop (LREC-12). pages 15–22.

Diana Maynard and Mark A. Greenwood. 2014. Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis. In LREC. pages 4238–4243. http://dblp.uni-trier.de/rec/bib/conf/lrec/MaynardG14.

Merriam-Webster Inc. 1983. Webster's Ninth New Collegiate Dictionary. Merriam-Webster. https://doi.org/10.1353/dic.1984.0017.

George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography 3(4):235–244. https://doi.org/10.1093/ijl/3.4.235.

Douglas Colin Muecke. 1982. Irony and the Ironic. Methuen.

Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics 29(1):19–51. http://aclweb.org/anthology/J03-1002.

Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval 2(1-2):1–135. http://dblp.uni-trier.de/rec/bib/journals/ftir/PangL07.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 311–318. www.aclweb.org/anthology/P02-1040.pdf.
Candida C. Peterson, Henry M. Wellman, and Virginia Slaughter. 2012. The mind behind the message: Advancing theory-of-mind scales for typically developing children, and those with deafness, autism, or Asperger syndrome. Child Development 83(2):469–485. https://doi.org/10.1111/j.1467-8624.2011.01728.x.

Ana-Maria Popescu, Bao Nguyen, and Oren Etzioni. 2005. OPINE: Extracting product features and opinions from reviews. In Proceedings of HLT/EMNLP Interactive Demonstrations. Association for Computational Linguistics, pages 32–33. https://doi.org/10.3115/1225733.1225750.

Chris Quirk, Chris Brockett, and William B. Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. pages 142–149. http://aclweb.org/anthology/W04-3219.

Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 704–714. http://aclweb.org/anthology/D13-1066.

S. G. Shamay-Tsoory, Rachel Tomer, and Judith Aharon-Peretz. 2005. The neuroanatomical basis of understanding sarcasm and its relationship to social cognition. Neuropsychology 19(3):288. https://doi.org/10.1037/0894-4105.19.3.288.

F. J. Stingfellow. 1994. The Meaning of Irony. State University of New York, New York.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. pages 3104–3112. http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf.

Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. ICWSM - a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews. In ICWSM. http://dblp.uni-trier.de/rec/bib/conf/icwsm/TsurDR10.

Po-Ya Angela Wang. 2013. #Irony or #sarcasm - a quantitative and qualitative study based on Twitter. In Proceedings of the 27th Pacific Asia Conference on Language, Information, and Computation (PACLIC 27). https://aclweb.org/anthology/Y/Y13/Y13-1035.pdf.

Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learning subjective language. Computational Linguistics 30(3):277–308. http://aclweb.org/anthology/J04-3002.

Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2010. Paraphrase generation as monolingual translation: Data and evaluation. In Proceedings of the 6th International Natural Language Generation Conference. Association for Computational Linguistics, pages 203–207. http://dblp.uni-trier.de/rec/bib/conf/inlg/WubbenBK10.

Wei Xu, Chris Callison-Burch, and William B. Dolan. 2015. SemEval-2015 Task 1: Paraphrase and semantic similarity in Twitter (PIT). In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). https://doi.org/10.18653/v1/s15-2001.

Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. http://dblp2.uni-trier.de/rec/bib/journals/corr/abs-1212-5701.